metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1334
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
What are the key contributions of Shen and Wan (2023) in the field of
reference-free evaluation?
sentences:
- may be constrained by the quality and variety of the reference data.
- "Springer.\n\n\n\n\n\n\nTyen et\_al. (2023)\n\nGladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Cărbune. 2023.\n\n\nLLMs cannot find reasoning errors, but can correct them!\n\n\narXiv preprint arXiv:2311.08516 (2023).\n\n\n\n\n\n\nValmeekam et\_al. (2023)\n\nKarthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. 2023.\n\n\nCan large language models really improve by self-critiquing their own plans?\n\n\narXiv preprint arXiv:2310.08118 (2023).\n\n\n\n\n\n\nVerga et\_al. (2024)\n\nPat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus, Arkady Arkhangorodsky, Minjie Xu, Naomi White, and Patrick Lewis. 2024."
- "Reference-Free Evaluation\_(Shen and Wan, 2023; Zheng et\_al., 2023a; He et\_al., 2023b):"
- source_sentence: >-
What role do LLM judges play in the iterative refinement process described
in the context?
sentences:
- "Gopalakrishnan et\_al. (2023)\n\nKarthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2023.\n\n\nTopical-chat: Towards knowledge-grounded open-domain conversations.\n\n\narXiv preprint arXiv:2308.11995 (2023).\n\n\n\n\n\n\nGuan et\_al. (2021)\n\nJian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie Fan, and Minlie Huang. 2021.\n\n\nOpenMEVA: A benchmark for evaluating open-ended story generation metrics.\n\n\narXiv preprint arXiv:2105.08920 (2021).\n\n\n\n\n\n\nGuo et\_al. (2024)"
- "Li et\_al. (2023c)\n\nJunlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2023c.\n\n\nGenerative judge for evaluating alignment.\n\n\narXiv preprint arXiv:2310.05470 (2023).\n\n\n\n\n\n\nLi et\_al. (2023a)\n\nQintong Li, Leyang Cui, Lingpeng Kong, and Wei Bi. 2023a.\n\n\nCollaborative Evaluation: Exploring the Synergy of Large Language Models and Humans for Open-ended Generation Evaluation.\n\n\narXiv preprint arXiv:2310.19740 (2023).\n\n\n\n\n\n\nLi et\_al. (2023b)\n\nRuosen Li, Teerth Patel, and Xinya Du. 2023b.\n\n\nPrd: Peer rank and discussion improve large language model based evaluations.\n\n\narXiv preprint arXiv:2307.02762 (2023).\n\n\n\n\n\n\nLi et\_al. (2017)"
- "3.2.3. Feedback for Refinement\n\nAfter receiving the initial response, LLM judges provide actionable feedback to iteratively improve output quality. By analyzing the response based on specific task criteria, such as accuracy, coherence, or creativity, the LLM can identify weaknesses in the output and offer suggestions for improvement. This iterative refinement process plays a crucial role in applications that require adaptability\_(Madaan et\_al., 2024; Paul et\_al., 2023; Chen et\_al., 2023a; Xu et\_al., 2023c; Huang et\_al., 2023)."
- source_sentence: >-
What metrics does the LLMS (2025) framework introduce to investigate
position bias in pairwise comparisons?
sentences:
- >-
6.2. Metric
The evaluation of LLMs-as-Judges models centers around assessing the
extent to which the model’s judgments align with human evaluations,
which are typically considered the benchmark for quality. Given the
complexity and subjectivity of many evaluation tasks, achieving high
agreement with human ratings is a key indicator of the LLM’s
performance. To quantify this agreement, a range of statistical metrics
is employed. Below, we outline these metrics and their applications in
evaluating LLMs-as-Judges models.
6.2.1. Accuracy
- "Recent studies have further examined position bias in the LLMs-as-judges context.\nFor instance, a framework\_(LLMS, 2025) is proposed to investigate position bias in pairwise comparisons, introducing metrics such as repetition stability, position consistency, and preference fairness to better understand how positions affect LLM judgments.\nAnother study\_(Zheng et\_al., 2023a) explores the limitations of LLMs-as-judges, including position biases, and verifies agreement between LLM judgments and human preferences across multiple benchmarks.\nThese findings underscore the need for robust debiasing strategies to enhance the fairness and reliableness of LLMs-as-judges."
- "Current LLM-as-Judge systems primarily focus on processing textual data, with limited attention to integrating other modalities like images, audio, and video. This single-modal approach falls short in complex scenarios requiring multimodal analysis, such as combining visual and textual information in medical assessments. Future systems should develop cross-modal integration capabilities to process and evaluate multimodal data simultaneously\_(Chen et\_al., 2024b). Leveraging cross-modal validation can enhance evaluation accuracy. Key research areas include efficient multimodal feature extraction, integration, and the design of unified frameworks for more comprehensive and precise evaluations."
- source_sentence: >-
How does the work by Jiefeng Chen et al. (2023c) propose to improve
selective prediction in large language models?
sentences:
- "Overconfidence bias\_(Khan et\_al., 2024; Jung et\_al., 2024) in the context of LLMs-as-judges refers to the tendency of models to exhibit an inflated level of confidence in their judgments, often resulting in overly assertive evaluations that may not accurately reflect the true reliability of the answer. This bias is particularly concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate the correctness of certain outputs, compromising the objectivity and dependability of assessments."
- "[Biases (§7.1)\n[Presentation-Related \n(§7.1.1)\n[Position bias\_(Blunch, 1984; Raghubir and Valenzuela, 2006; Ko et\_al., 2020; Wang et\_al., 2018; LLMS, 2025; Zheng et\_al., 2023a; Chen et\_al., 2024a; Wang et\_al., 2023b; Li et\_al., 2023c; Zheng et\_al., 2023b; Raina et\_al., 2024; Hou et\_al., 2024; Li et\_al., 2023d, b; Khan et\_al., 2024; Zhou et\_al., 2023a; Li et\_al., 2024a; Shi et\_al., 2024a; Stureborg et\_al., 2024; Zhao et\_al., 2024a), Verbosity bias\_(Nasrabadi, 2024; Ye et\_al., 2024b, a), leaf, text width=41em] ]\n[Social-Related (§7.1.2)"
- "Journal of Natural Language Processing 30, 1 (2023), 243–249.\n\n\n\n\n\n\nChen et\_al. (2024e)\n\nJunjie Chen, Weihang Su, Zhumin Chu, Haitao Li, Qinyao Ai, Yiqun Liu, Min Zhang, and Shaoping Ma. 2024e.\n\n\nAn Automatic and Cost-Efficient Peer-Review Framework for Language Generation Evaluation.\n\n\n\n\narXiv:2410.12265\_[cs.CL]\n\nhttps://arxiv.org/abs/2410.12265\n\n\n\nChen et\_al. (2023c)\n\nJiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan\_O Arik, Tomas Pfister, and Somesh Jha. 2023c.\n\n\nAdaptation with self-evaluation to improve selective prediction in llms.\n\n\narXiv preprint arXiv:2310.11689 (2023).\n\n\n\n\n\n\nChen et\_al. (2024d)"
- source_sentence: >-
How do Dong et al. (2022) contribute to the understanding of in-context
learning in their survey?
sentences:
- "Additionally, the LLMAAA\_(Zhang et\_al., 2023a) framework incorporates an active learning strategy to efficiently select high-information samples for annotation, thereby mitigating the effects of noisy labels and reducing the reliance on costly human annotation. These approach not only enhance the performance of task-specific models but also offer new perspectives on the efficient application of LLMs in annotation workflows."
- "Dong et\_al. (2024a)\n\nQingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, and Furu Wei. 2024a.\n\n\nSelf-Boosting Large Language Models with Synthetic Preference Data.\n\n\narXiv preprint arXiv:2410.06961 (2024).\n\n\n\n\n\n\nDong et\_al. (2022)\n\nQingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, et\_al. 2022.\n\n\nA survey on in-context learning.\n\n\narXiv preprint arXiv:2301.00234 (2022).\n\n\n\n\n\n\nDong et\_al. (2024b)\n\nYijiang\_River Dong, Tiancheng Hu, and Nigel Collier. 2024b.\n\n\nCan LLM be a Personalized Judge?\n\n\narXiv preprint arXiv:2406.11657 (2024).\n\n\n\n\n\n\nDorner et\_al. (2024)\n\nFlorian\_E. Dorner, Vivian\_Y. Nastl, and Moritz Hardt. 2024."
- >-
The search task is a fundamental component of information retrieval
(IR), focusing on identifying the most relevant documents from extensive
text collections based on user queries. Traditionally, relevance
assessments in search tasks have been conducted by human annotators
following established guidelines. However, recent advances in large
language models (LLMs) have opened up new opportunities for utilizing
these models as evaluators, offering an automated and scalable approach
to relevance assessment.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.92
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.99
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.92
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33000000000000007
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.92
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.99
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9667243132866329
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9553333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9553333333333334
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
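For readers who want to see what these three modules actually do, below is a minimal, illustrative sketch that reproduces the pipeline by hand with the plain transformers API (Transformer encoder, CLS-token pooling, then L2 normalization). It assumes the underlying BERT weights of this checkpoint load directly with AutoModel, as is standard for sentence-transformers repositories; it is not a drop-in replacement for SentenceTransformer.encode.
import torch
from transformers import AutoModel, AutoTokenizer

# Sketch of the module list above: Transformer -> CLS pooling -> Normalize
model_id = "chelleboyer/llm-mm-good-eb8e3f60-56f2-4729-8934-2428ca568d27"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(
    ["What role do LLM judges play in iterative refinement?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**inputs).last_hidden_state  # (batch, seq_len, 1024)

# pooling_mode_cls_token=True: keep only the [CLS] token embedding
cls_embedding = token_embeddings[:, 0]
# Normalize(): unit-length vectors, so dot product equals cosine similarity
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 1024])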
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chelleboyer/llm-mm-good-eb8e3f60-56f2-4729-8934-2428ca568d27")
# Run inference
sentences = [
'How do Dong et al. (2022) contribute to the understanding of in-context learning in their survey?',
'Dong et\xa0al. (2024a)\n\nQingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, and Furu Wei. 2024a.\n\n\nSelf-Boosting Large Language Models with Synthetic Preference Data.\n\n\narXiv preprint arXiv:2410.06961 (2024).\n\n\n\n\n\n\nDong et\xa0al. (2022)\n\nQingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing Xu, Zhiyong Wu, Tianyu Liu, et\xa0al. 2022.\n\n\nA survey on in-context learning.\n\n\narXiv preprint arXiv:2301.00234 (2022).\n\n\n\n\n\n\nDong et\xa0al. (2024b)\n\nYijiang\xa0River Dong, Tiancheng Hu, and Nigel Collier. 2024b.\n\n\nCan LLM be a Personalized Judge?\n\n\narXiv preprint arXiv:2406.11657 (2024).\n\n\n\n\n\n\nDorner et\xa0al. (2024)\n\nFlorian\xa0E. Dorner, Vivian\xa0Y. Nastl, and Moritz Hardt. 2024.',
'Additionally, the LLMAAA\xa0(Zhang et\xa0al., 2023a) framework incorporates an active learning strategy to efficiently select high-information samples for annotation, thereby mitigating the effects of noisy labels and reducing the reliance on costly human annotation. These approach not only enhance the performance of task-specific models but also offer new perspectives on the efficient application of LLMs in annotation workflows.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
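Because the model targets retrieval-style similarity, a common follow-up is ranking a small corpus against a query. The sketch below continues from the snippet above (reusing model) and uses sentence_transformers.util.semantic_search; the corpus strings are hypothetical placeholders to swap for your own documents.
from sentence_transformers import util

query = "What metrics are used to evaluate LLMs-as-judges?"
corpus = [
    "Agreement with human ratings is measured with accuracy and correlation metrics.",
    "Position bias describes how the order of candidate answers sways an LLM judge.",
    "Matryoshka representation learning trains embeddings that remain useful when truncated.",
]
query_emb = model.encode(query)
corpus_emb = model.encode(corpus)
# Returns one ranked hit list per query; each hit has a corpus_id and a cosine score
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])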
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.92 |
cosine_accuracy@3 | 0.99 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.92 |
cosine_precision@3 | 0.33 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.92 |
cosine_recall@3 | 0.99 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9667 |
cosine_mrr@10 | 0.9553 |
cosine_map@100 | 0.9553 |
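These figures come from an InformationRetrievalEvaluator run on a held-out split of the training queries. A minimal sketch of how such an evaluation can be reproduced is shown below; the queries, corpus, and relevance mapping are hypothetical stand-ins, since the exact evaluation split is not published with this card.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("chelleboyer/llm-mm-good-eb8e3f60-56f2-4729-8934-2428ca568d27")

# Placeholder evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "What role do LLM judges play in iterative refinement?"}
corpus = {
    "d1": "LLM judges provide actionable feedback to iteratively improve output quality.",
    "d2": "Position bias affects pairwise comparisons between candidate answers.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = evaluator(model)
print(results)  # includes keys such as cosine_ndcg@10, cosine_mrr@10 (prefixed with the evaluator name)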
Training Details
Training Dataset
Unnamed Dataset
- Size: 1,334 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 1000 samples:
|  | sentence_0 | sentence_1 |
|:---|:---|:---|
| type | string | string |
| details | min: 5 tokens, mean: 23.14 tokens, max: 69 tokens | min: 3 tokens, mean: 132.04 tokens, max: 306 tokens |
- Samples:
| sentence_0 | sentence_1 |
|:---|:---|
| What are the key components of the evaluation function ( E ) as described in the preliminaries section? | LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods 1 Introduction 2 PRELIMINARIES 2.1 Evaluation Function E𝐸Eitalic_E 2.2 Evaluation Input 2.2.1 Evaluation Type 𝒯𝒯\mathcal{T}caligraphic_T 2.2.2 Evaluation Criteria 𝒞𝒞\mathcal{C}caligraphic_C. 2.2.3 Evaluation References ℛℛ\mathcal{R}caligraphic_R. 2.3 Evaluation Output 3 Functionality 3.1 Performance Evaluation 3.1.1 Responses Evaluation 3.1.2 Model Evaluation 3.2 Model Enhancement 3.2.1 Reward Modeling During Training 3.2.2 Acting as Verifier During Inference 3.2.3 Feedback for Refinement 3.3 Data Construction 3.3.1 Data Annotation 3.3.2 Data Synthesize 4 Methodology |
| How do LLMs contribute to model enhancement according to the functionalities outlined in the survey? | LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods 1 Introduction 2 PRELIMINARIES 2.1 Evaluation Function E𝐸Eitalic_E 2.2 Evaluation Input 2.2.1 Evaluation Type 𝒯𝒯\mathcal{T}caligraphic_T 2.2.2 Evaluation Criteria 𝒞𝒞\mathcal{C}caligraphic_C. 2.2.3 Evaluation References ℛℛ\mathcal{R}caligraphic_R. 2.3 Evaluation Output 3 Functionality 3.1 Performance Evaluation 3.1.1 Responses Evaluation 3.1.2 Model Evaluation 3.2 Model Enhancement 3.2.1 Reward Modeling During Training 3.2.2 Acting as Verifier During Inference 3.2.3 Feedback for Refinement 3.3 Data Construction 3.3.1 Data Annotation 3.3.2 Data Synthesize 4 Methodology |
| What are the different approaches discussed under the Single-LLM System methodology? | 4 Methodology 4.1 Single-LLM System 4.1.1 Prompt-based 4.1.2 Tuning-based 4.1.3 Post-processing 4.2 Multi-LLM System 4.2.1 Communication 4.2.2 Aggregation 4.3 Human-AI Collaboration System 5 Application 5.1 General 5.2 Multimodal 5.3 Medical 5.4 Legal 5.5 Financial 5.6 Education 5.7 Information Retrieval 5.8 Others 5.8.1 Soft Engineering 5.8.2 Biology 5.8.3 Social Science 6 Meta-evaluation 6.1 Benchmarks 6.1.1 Code Generation 6.1.2 Machine Translation 6.1.3 Text Summarization 6.1.4 Dialogue Generation 6.1.5 Automatic Story Generation 6.1.6 Values Alignment 6.1.7 Recommendation 6.1.8 Search 6.1.9 Comprehensive Data 6.2 Metric |
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 }
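Because the model was trained with MatryoshkaLoss over the dimensions listed above, its embeddings can be truncated to a smaller size with only a modest quality drop. A small sketch follows; the choice of 256 dimensions is arbitrary, and any of the trained Matryoshka dimensions can be used.
from sentence_transformers import SentenceTransformer

# Load the model with truncated embeddings
model_256 = SentenceTransformer(
    "chelleboyer/llm-mm-good-eb8e3f60-56f2-4729-8934-2428ca568d27",
    truncate_dim=256,
)
emb = model_256.encode(["LLM judges provide feedback for refinement."])
print(emb.shape)  # (1, 256)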
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 50
- per_device_eval_batch_size: 50
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 50
- per_device_eval_batch_size: 50
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
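For reference, the configuration above corresponds roughly to a training script like the following sketch. The dataset loading is a placeholder (the 1,334 training pairs are not published with this card), the output directory name is hypothetical, and only the non-default hyperparameters are spelled out.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder: in practice, 1,334 (sentence_0, sentence_1) pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["What role do LLM judges play in iterative refinement?"],
    "sentence_1": ["LLM judges provide actionable feedback to improve output quality."],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, as in the loss config above
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="llm-mm-good",                    # placeholder output directory
    num_train_epochs=10,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    multi_dataset_batch_sampler="round_robin",   # no effect with a single dataset, but matches the logged config
    # The original run also used eval_strategy="steps" together with an IR evaluator / eval split.
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()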
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 27 | 0.9647 |
1.8519 | 50 | 0.9685 |
2.0 | 54 | 0.9717 |
3.0 | 81 | 0.9717 |
3.7037 | 100 | 0.9778 |
4.0 | 108 | 0.9754 |
5.0 | 135 | 0.9699 |
5.5556 | 150 | 0.9699 |
6.0 | 162 | 0.9664 |
7.0 | 189 | 0.9630 |
7.4074 | 200 | 0.9667 |
8.0 | 216 | 0.9667 |
9.0 | 243 | 0.9667 |
9.2593 | 250 | 0.9667 |
10.0 | 270 | 0.9667 |
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}