---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:1334
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: How can the quality of reference data constrain outcomes?
    sentences:
      - "Dong et\_al. (2024a)\n\nQingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, and Furu Wei. 2024a.\n\n\nSelf-Boosting Large Language Models with Synthetic Preference Data.\n\n\narXiv preprint arXiv:2410.06961 (2024).\n\n\n\n\n\n\nDong et\_al. (2022)\n\nQingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, et\_al. 2022.\n\n\nA survey on in-context learning.\n\n\narXiv preprint arXiv:2301.00234 (2022).\n\n\n\n\n\n\nDong et\_al. (2024b)\n\nYijiang\_River Dong, Tiancheng Hu, and Nigel Collier. 2024b.\n\n\nCan LLM be a Personalized Judge?\n\n\narXiv preprint arXiv:2406.11657 (2024).\n\n\n\n\n\n\nDorner et\_al. (2024)\n\nFlorian\_E. Dorner, Vivian\_Y. Nastl, and Moritz Hardt. 2024."
      - "Journal of Natural Language Processing 30, 1 (2023), 243–249.\n\n\n\n\n\n\nChen et\_al. (2024e)\n\nJunjie Chen, Weihang Su, Zhumin Chu, Haitao Li, Qinyao Ai, Yiqun Liu, Min Zhang, and Shaoping Ma. 2024e.\n\n\nAn Automatic and Cost-Efficient Peer-Review Framework for Language Generation Evaluation.\n\n\n\n\narXiv:2410.12265\_[cs.CL]\n\nhttps://arxiv.org/abs/2410.12265\n\n\n\nChen et\_al. (2023c)\n\nJiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan\_O Arik, Tomas Pfister, and Somesh Jha. 2023c.\n\n\nAdaptation with self-evaluation to improve selective prediction in llms.\n\n\narXiv preprint arXiv:2310.11689 (2023).\n\n\n\n\n\n\nChen et\_al. (2024d)"
      - may be constrained by the quality and variety of the reference data.
  - source_sentence: >-
      What are the key contributions of Shen and Wan (2023) in the field of
      reference-free evaluation?
    sentences:
      - "Li et\_al. (2023c)\n\nJunlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2023c.\n\n\nGenerative judge for evaluating alignment.\n\n\narXiv preprint arXiv:2310.05470 (2023).\n\n\n\n\n\n\nLi et\_al. (2023a)\n\nQintong Li, Leyang Cui, Lingpeng Kong, and Wei Bi. 2023a.\n\n\nCollaborative Evaluation: Exploring the Synergy of Large Language Models and Humans for Open-ended Generation Evaluation.\n\n\narXiv preprint arXiv:2310.19740 (2023).\n\n\n\n\n\n\nLi et\_al. (2023b)\n\nRuosen Li, Teerth Patel, and Xinya Du. 2023b.\n\n\nPrd: Peer rank and discussion improve large language model based evaluations.\n\n\narXiv preprint arXiv:2307.02762 (2023).\n\n\n\n\n\n\nLi et\_al. (2017)"
      - "Springer.\n\n\n\n\n\n\nTyen et\_al. (2023)\n\nGladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Cărbune. 2023.\n\n\nLLMs cannot find reasoning errors, but can correct them!\n\n\narXiv preprint arXiv:2311.08516 (2023).\n\n\n\n\n\n\nValmeekam et\_al. (2023)\n\nKarthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. 2023.\n\n\nCan large language models really improve by self-critiquing their own plans?\n\n\narXiv preprint arXiv:2310.08118 (2023).\n\n\n\n\n\n\nVerga et\_al. (2024)\n\nPat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus, Arkady Arkhangorodsky, Minjie Xu, Naomi White, and Patrick Lewis. 2024."
      - "Reference-Free Evaluation\_(Shen and Wan, 2023; Zheng et\_al., 2023a; He et\_al., 2023b):"
  - source_sentence: >-
      What role do LLM judges play in the iterative refinement process described
      in the context?
    sentences:
      - "[Biases (§7.1)\n[Presentation-Related \n(§7.1.1)\n[Position bias\_(Blunch, 1984; Raghubir and Valenzuela, 2006; Ko et\_al., 2020; Wang et\_al., 2018; LLMS, 2025; Zheng et\_al., 2023a; Chen et\_al., 2024a; Wang et\_al., 2023b; Li et\_al., 2023c; Zheng et\_al., 2023b; Raina et\_al., 2024; Hou et\_al., 2024; Li et\_al., 2023d, b; Khan et\_al., 2024; Zhou et\_al., 2023a; Li et\_al., 2024a; Shi et\_al., 2024a; Stureborg et\_al., 2024; Zhao et\_al., 2024a), Verbosity bias\_(Nasrabadi, 2024; Ye et\_al., 2024b, a), leaf, text width=41em] ]\n[Social-Related (§7.1.2)"
      - "3.2.3. Feedback for Refinement\n\nAfter receiving the initial response, LLM judges provide actionable feedback to iteratively improve output quality. By analyzing the response based on specific task criteria, such as accuracy, coherence, or creativity, the LLM can identify weaknesses in the output and offer suggestions for improvement. This iterative refinement process plays a crucial role in applications that require adaptability\_(Madaan et\_al., 2024; Paul et\_al., 2023; Chen et\_al., 2023a; Xu et\_al., 2023c; Huang et\_al., 2023)."
      - "Gopalakrishnan et\_al. (2023)\n\nKarthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2023.\n\n\nTopical-chat: Towards knowledge-grounded open-domain conversations.\n\n\narXiv preprint arXiv:2308.11995 (2023).\n\n\n\n\n\n\nGuan et\_al. (2021)\n\nJian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie Fan, and Minlie Huang. 2021.\n\n\nOpenMEVA: A benchmark for evaluating open-ended story generation metrics.\n\n\narXiv preprint arXiv:2105.08920 (2021).\n\n\n\n\n\n\nGuo et\_al. (2024)"
  - source_sentence: >-
      In what ways does the LLMAAA approach help mitigate the effects of noisy
      labels?
    sentences:
      - >-
        6.2. Metric


        The evaluation of LLMs-as-Judges models centers around assessing the
        extent to which the model’s judgments align with human evaluations,
        which are typically considered the benchmark for quality. Given the
        complexity and subjectivity of many evaluation tasks, achieving high
        agreement with human ratings is a key indicator of the LLM’s
        performance. To quantify this agreement, a range of statistical metrics
        is employed. Below, we outline these metrics and their applications in
        evaluating LLMs-as-Judges models.




        6.2.1. Accuracy
      - "Current LLM-as-Judge systems primarily focus on processing textual data, with limited attention to integrating other modalities like images, audio, and video. This single-modal approach falls short in complex scenarios requiring multimodal analysis, such as combining visual and textual information in medical assessments. Future systems should develop cross-modal integration capabilities to process and evaluate multimodal data simultaneously\_(Chen et\_al., 2024b). Leveraging cross-modal validation can enhance evaluation accuracy. Key research areas include efficient multimodal feature extraction, integration, and the design of unified frameworks for more comprehensive and precise evaluations."
      - "Additionally, the LLMAAA\_(Zhang et\_al., 2023a) framework incorporates an active learning strategy to efficiently select high-information samples for annotation, thereby mitigating the effects of noisy labels and reducing the reliance on costly human annotation. These approach not only enhance the performance of task-specific models but also offer new perspectives on the efficient application of LLMs in annotation workflows."
  - source_sentence: >-
      What metrics does the LLMS (2025) framework introduce to investigate
      position bias in pairwise comparisons?
    sentences:
      - "Overconfidence bias\_(Khan et\_al., 2024; Jung et\_al., 2024) in the context of LLMs-as-judges refers to the tendency of models to exhibit an inflated level of confidence in their judgments, often resulting in overly assertive evaluations that may not accurately reflect the true reliability of the answer. This bias is particularly concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate the correctness of certain outputs, compromising the objectivity and dependability of assessments."
      - "Recent studies have further examined position bias in the LLMs-as-judges context.\nFor instance, a framework\_(LLMS, 2025) is proposed to investigate position bias in pairwise comparisons, introducing metrics such as repetition stability, position consistency, and preference fairness to better understand how positions affect LLM judgments.\nAnother study\_(Zheng et\_al., 2023a) explores the limitations of LLMs-as-judges, including position biases, and verifies agreement between LLM judgments and human preferences across multiple benchmarks.\nThese findings underscore the need for robust debiasing strategies to enhance the fairness and reliableness of LLMs-as-judges."
      - >-
        The search task is a fundamental component of information retrieval
        (IR), focusing on identifying the most relevant documents from extensive
        text collections based on user queries. Traditionally, relevance
        assessments in search tasks have been conducted by human annotators
        following established guidelines. However, recent advances in large
        language models (LLMs) have opened up new opportunities for utilizing
        these models as evaluators, offering an automated and scalable approach
        to relevance assessment.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.93
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.99
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.93
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.33000000000000007
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.19999999999999996
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.09999999999999998
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.93
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.99
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9704150157509183
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9603333333333333
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9603333333333333
            name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
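
The figures above can be read directly off the loaded model. As a quick sanity check (a small sketch, assuming the model id from the Usage section below):

from sentence_transformers import SentenceTransformer

# Load the fine-tuned model from the Hub
model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")

print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
print(model)                                     # Transformer -> Pooling (CLS) -> Normalize, as listed above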

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")
# Run inference
sentences = [
    'What metrics does the LLMS (2025) framework introduce to investigate position bias in pairwise comparisons?',
    'Recent studies have further examined position bias in the LLMs-as-judges context.\nFor instance, a framework\xa0(LLMS, 2025) is proposed to investigate position bias in pairwise comparisons, introducing metrics such as repetition stability, position consistency, and preference fairness to better understand how positions affect LLM judgments.\nAnother study\xa0(Zheng et\xa0al., 2023a) explores the limitations of LLMs-as-judges, including position biases, and verifies agreement between LLM judgments and human preferences across multiple benchmarks.\nThese findings underscore the need for robust debiasing strategies to enhance the fairness and reliableness of LLMs-as-judges.',
    'Overconfidence bias\xa0(Khan et\xa0al., 2024; Jung et\xa0al., 2024) in the context of LLMs-as-judges refers to the tendency of models to exhibit an inflated level of confidence in their judgments, often resulting in overly assertive evaluations that may not accurately reflect the true reliability of the answer. This bias is particularly concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate the correctness of certain outputs, compromising the objectivity and dependability of assessments.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
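
Since the model is evaluated on retrieval below, a natural extension of the example above is to rank candidate passages for a query. The following sketch reuses the model and sentences variables from the snippet above; the ranking loop itself is illustrative and not part of the original card.

# Rank the two passages against the question by cosine similarity
passages = sentences[1:]
query_embedding = model.encode([sentences[0]])
passage_embeddings = model.encode(passages)

# model.similarity applies the model's configured similarity function (cosine for this model)
scores = model.similarity(query_embedding, passage_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{float(scores[idx]):.4f}  {passages[idx][:80]}")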

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.93
cosine_accuracy@3 0.99
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.93
cosine_precision@3 0.33
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.93
cosine_recall@3 0.99
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9704
cosine_mrr@10 0.9603
cosine_map@100 0.9603
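
These scores are consistent with each query having a single relevant passage, in which case precision@k can be at most 1/k (hence 0.33, 0.2, and 0.1 once that passage is always retrieved). Metrics of this kind are typically computed with the library's InformationRetrievalEvaluator; the sketch below shows the general pattern on a toy query/corpus where every id and text is invented for illustration.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")

# Toy evaluation data: one relevant passage per query (ids and texts are illustrative)
queries = {"q1": "How can the quality of reference data constrain outcomes?"}
corpus = {
    "d1": "may be constrained by the quality and variety of the reference data.",
    "d2": "Overconfidence bias refers to inflated confidence in model judgments.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-ir")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_recall@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100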

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,334 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min: 5 tokens, mean: 23.12 tokens, max: 72 tokens
    • sentence_1: string; min: 3 tokens, mean: 132.04 tokens, max: 306 tokens
  • Samples:
    • sentence_0: What are the main components of the evaluation function (E) as described in the preliminaries section?
      sentence_1: Table-of-contents excerpt from "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods": 1 Introduction; 2 PRELIMINARIES (2.1 Evaluation Function E; 2.2 Evaluation Input: 2.2.1 Evaluation Type T, 2.2.2 Evaluation Criteria C, 2.2.3 Evaluation References R; 2.3 Evaluation Output); 3 Functionality (3.1 Performance Evaluation: 3.1.1 Responses Evaluation, 3.1.2 Model Evaluation; 3.2 Model Enhancement: 3.2.1 Reward Modeling During Training, 3.2.2 Acting as Verifier During Inference, 3.2.3 Feedback for Refinement; 3.3 Data Construction: 3.3.1 Data Annotation, 3.3.2 Data Synthesize); 4 Methodology
    • sentence_0: How do LLMs contribute to model enhancement according to the functionalities outlined in the survey?
      sentence_1: same table-of-contents excerpt as the previous sample
    • sentence_0: What are the different approaches discussed under the Single-LLM System methodology?
      sentence_1: 4 Methodology (4.1 Single-LLM System: 4.1.1 Prompt-based, 4.1.2 Tuning-based, 4.1.3 Post-processing; 4.2 Multi-LLM System: 4.2.1 Communication, 4.2.2 Aggregation; 4.3 Human-AI Collaboration System); 5 Application (5.1 General, 5.2 Multimodal, 5.3 Medical, 5.4 Legal, 5.5 Financial, 5.6 Education, 5.7 Information Retrieval, 5.8 Others: 5.8.1 Soft Engineering, 5.8.2 Biology, 5.8.3 Social Science); 6 Meta-evaluation (6.1 Benchmarks: 6.1.1 Code Generation, 6.1.2 Machine Translation, 6.1.3 Text Summarization, 6.1.4 Dialogue Generation, 6.1.5 Automatic Story Generation, 6.1.6 Values Alignment, 6.1.7 Recommendation, 6.1.8 Search, 6.1.9 Comprehensive Data; 6.2 Metric)
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
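
In sentence-transformers, these parameters correspond to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss. A minimal sketch of that construction (training data and trainer omitted) would look like this:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking loss
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # nested embedding sizes trained jointly
    matryoshka_weights=[1, 1, 1, 1, 1],        # equal weight per dimension
    n_dims_per_step=-1,                        # use all listed dimensions at every step
)

One practical upshot of Matryoshka training is that embeddings can be truncated to any of the listed sizes (for example by loading the model with truncate_dim=256) while typically retaining most of the retrieval quality.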
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 50
  • per_device_eval_batch_size: 50
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
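
Expressed as code, these non-default values map onto SentenceTransformerTrainingArguments. The sketch below is illustrative only: the output path, the one-row placeholder dataset, and the reuse of the training data for evaluation are assumptions, not details from the card.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

# Placeholder data with the (sentence_0, sentence_1) column layout described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What does the evaluation function take as input?"],
    "sentence_1": ["2.2 Evaluation Input: evaluation type, criteria, and references."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/arctic-embed-l-finetuned",  # illustrative path
    num_train_epochs=10,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; the card used a separate evaluation split/evaluator
    loss=loss,
)
trainer.train()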

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 50
  • per_device_eval_batch_size: 50
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 27 0.9697
1.8519 50 0.9788
2.0 54 0.9775
3.0 81 0.9741
3.7037 100 0.9791
4.0 108 0.9741
5.0 135 0.9782
5.5556 150 0.9782
6.0 162 0.9782
7.0 189 0.9782
7.4074 200 0.9741
8.0 216 0.9741
9.0 243 0.9704
9.2593 250 0.9704
10.0 270 0.9704

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 2.14.4
  • Tokenizers: 0.21.1
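
To approximate this environment, the listed versions can be pinned directly (the CUDA-specific PyTorch build is platform-dependent and omitted here):

pip install sentence-transformers==3.4.1 transformers==4.51.3 torch==2.6.0 accelerate==1.6.0 datasets==2.14.4 tokenizers==0.21.1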

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}