---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1334
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: How can the quality of reference data constrain outcomes?
sentences:
- 'Dong et al. (2024a)
Qingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, and Furu Wei. 2024a.
Self-Boosting Large Language Models with Synthetic Preference Data.
arXiv preprint arXiv:2410.06961 (2024).
Dong et al. (2022)
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing
Xu, Zhiyong Wu, Tianyu Liu, et al. 2022.
A survey on in-context learning.
arXiv preprint arXiv:2301.00234 (2022).
Dong et al. (2024b)
Yijiang River Dong, Tiancheng Hu, and Nigel Collier. 2024b.
Can LLM be a Personalized Judge?
arXiv preprint arXiv:2406.11657 (2024).
Dorner et al. (2024)
Florian E. Dorner, Vivian Y. Nastl, and Moritz Hardt. 2024.'
- 'Journal of Natural Language Processing 30, 1 (2023), 243–249.
Chen et al. (2024e)
Junjie Chen, Weihang Su, Zhumin Chu, Haitao Li, Qinyao Ai, Yiqun Liu, Min Zhang,
and Shaoping Ma. 2024e.
An Automatic and Cost-Efficient Peer-Review Framework for Language Generation
Evaluation.
arXiv:2410.12265 [cs.CL]
https://arxiv.org/abs/2410.12265
Chen et al. (2023c)
Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, and
Somesh Jha. 2023c.
Adaptation with self-evaluation to improve selective prediction in llms.
arXiv preprint arXiv:2310.11689 (2023).
Chen et al. (2024d)'
- may be constrained by the quality and variety of the reference data.
- source_sentence: What are the key contributions of Shen and Wan (2023) in the field
of reference-free evaluation?
sentences:
- 'Li et al. (2023c)
Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2023c.
Generative judge for evaluating alignment.
arXiv preprint arXiv:2310.05470 (2023).
Li et al. (2023a)
Qintong Li, Leyang Cui, Lingpeng Kong, and Wei Bi. 2023a.
Collaborative Evaluation: Exploring the Synergy of Large Language Models and Humans
for Open-ended Generation Evaluation.
arXiv preprint arXiv:2310.19740 (2023).
Li et al. (2023b)
Ruosen Li, Teerth Patel, and Xinya Du. 2023b.
Prd: Peer rank and discussion improve large language model based evaluations.
arXiv preprint arXiv:2307.02762 (2023).
Li et al. (2017)'
- 'Springer.
Tyen et al. (2023)
Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Cărbune. 2023.
LLMs cannot find reasoning errors, but can correct them!
arXiv preprint arXiv:2311.08516 (2023).
Valmeekam et al. (2023)
Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. 2023.
Can large language models really improve by self-critiquing their own plans?
arXiv preprint arXiv:2310.08118 (2023).
Verga et al. (2024)
Pat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus,
Arkady Arkhangorodsky, Minjie Xu, Naomi White, and Patrick Lewis. 2024.'
- 'Reference-Free Evaluation (Shen and Wan, 2023; Zheng et al., 2023a; He et al.,
2023b):'
- source_sentence: What role do LLM judges play in the iterative refinement process
described in the context?
sentences:
- "[Biases (§7.1)\n[Presentation-Related \n(§7.1.1)\n[Position bias (Blunch, 1984;\
\ Raghubir and Valenzuela, 2006; Ko et al., 2020; Wang et al., 2018; LLMS, 2025;\
\ Zheng et al., 2023a; Chen et al., 2024a; Wang et al., 2023b; Li et al., 2023c;\
\ Zheng et al., 2023b; Raina et al., 2024; Hou et al., 2024; Li et al., 2023d,\
\ b; Khan et al., 2024; Zhou et al., 2023a; Li et al., 2024a; Shi et al., 2024a;\
\ Stureborg et al., 2024; Zhao et al., 2024a), Verbosity bias (Nasrabadi, 2024;\
\ Ye et al., 2024b, a), leaf, text width=41em] ]\n[Social-Related (§7.1.2)"
- '3.2.3. Feedback for Refinement
After receiving the initial response, LLM judges provide actionable feedback to
iteratively improve output quality. By analyzing the response based on specific
task criteria, such as accuracy, coherence, or creativity, the LLM can identify
weaknesses in the output and offer suggestions for improvement. This iterative
refinement process plays a crucial role in applications that require adaptability (Madaan
et al., 2024; Paul et al., 2023; Chen et al., 2023a; Xu et al., 2023c; Huang et al.,
2023).'
- 'Gopalakrishnan et al. (2023)
Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev
Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2023.
Topical-chat: Towards knowledge-grounded open-domain conversations.
arXiv preprint arXiv:2308.11995 (2023).
Guan et al. (2021)
Jian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie
Fan, and Minlie Huang. 2021.
OpenMEVA: A benchmark for evaluating open-ended story generation metrics.
arXiv preprint arXiv:2105.08920 (2021).
Guo et al. (2024)'
- source_sentence: In what ways does the LLMAAA approach help mitigate the effects
of noisy labels?
sentences:
- '6.2. Metric
The evaluation of LLMs-as-Judges models centers around assessing the extent to
which the model’s judgments align with human evaluations, which are typically
considered the benchmark for quality. Given the complexity and subjectivity of
many evaluation tasks, achieving high agreement with human ratings is a key indicator
of the LLM’s performance. To quantify this agreement, a range of statistical metrics
is employed. Below, we outline these metrics and their applications in evaluating
LLMs-as-Judges models.
6.2.1. Accuracy'
- Current LLM-as-Judge systems primarily focus on processing textual data, with
limited attention to integrating other modalities like images, audio, and video.
This single-modal approach falls short in complex scenarios requiring multimodal
analysis, such as combining visual and textual information in medical assessments.
Future systems should develop cross-modal integration capabilities to process
and evaluate multimodal data simultaneously (Chen et al., 2024b). Leveraging cross-modal
validation can enhance evaluation accuracy. Key research areas include efficient
multimodal feature extraction, integration, and the design of unified frameworks
for more comprehensive and precise evaluations.
- Additionally, the LLMAAA (Zhang et al., 2023a) framework incorporates an active
learning strategy to efficiently select high-information samples for annotation,
thereby mitigating the effects of noisy labels and reducing the reliance on costly
human annotation. These approach not only enhance the performance of task-specific
models but also offer new perspectives on the efficient application of LLMs in
annotation workflows.
- source_sentence: What metrics does the LLMS (2025) framework introduce to investigate
position bias in pairwise comparisons?
sentences:
- Overconfidence bias (Khan et al., 2024; Jung et al., 2024) in the context of LLMs-as-judges
refers to the tendency of models to exhibit an inflated level of confidence in
their judgments, often resulting in overly assertive evaluations that may not
accurately reflect the true reliability of the answer. This bias is particularly
concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate
the correctness of certain outputs, compromising the objectivity and dependability
of assessments.
- 'Recent studies have further examined position bias in the LLMs-as-judges context.
For instance, a framework (LLMS, 2025) is proposed to investigate position bias
in pairwise comparisons, introducing metrics such as repetition stability, position
consistency, and preference fairness to better understand how positions affect
LLM judgments.
Another study (Zheng et al., 2023a) explores the limitations of LLMs-as-judges,
including position biases, and verifies agreement between LLM judgments and human
preferences across multiple benchmarks.
These findings underscore the need for robust debiasing strategies to enhance
the fairness and reliableness of LLMs-as-judges.'
- The search task is a fundamental component of information retrieval (IR), focusing
on identifying the most relevant documents from extensive text collections based
on user queries. Traditionally, relevance assessments in search tasks have been
conducted by human annotators following established guidelines. However, recent
advances in large language models (LLMs) have opened up new opportunities for
utilizing these models as evaluators, offering an automated and scalable approach
to relevance assessment.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.93
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.99
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.93
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33000000000000007
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999996
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.93
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.99
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9704150157509183
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9603333333333333
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9603333333333333
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
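If helpful, the maximum sequence length and output dimensionality listed above can be verified directly from the loaded model; a minimal sketch:
```python
from sentence_transformers import SentenceTransformer

# Quick check of the configuration described in the Model Description
model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 1024
```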
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
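For readers unfamiliar with the module listing above, the same three-stage stack (transformer encoder, CLS-token pooling, L2 normalization) could be assembled from `sentence_transformers.models` roughly as follows. This is an illustrative sketch of the architecture, not the way to load the finetuned checkpoint; see the Usage section below for that.
```python
from sentence_transformers import SentenceTransformer, models

# Backbone transformer with the 512-token limit shown above
transformer = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
# CLS-token pooling, yielding 1024-dimensional sentence embeddings
pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
# L2 normalization, so dot product and cosine similarity coincide
normalize = models.Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
```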
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")
# Run inference
sentences = [
    'What metrics does the LLMS (2025) framework introduce to investigate position bias in pairwise comparisons?',
    'Recent studies have further examined position bias in the LLMs-as-judges context.\nFor instance, a framework\xa0(LLMS, 2025) is proposed to investigate position bias in pairwise comparisons, introducing metrics such as repetition stability, position consistency, and preference fairness to better understand how positions affect LLM judgments.\nAnother study\xa0(Zheng et\xa0al., 2023a) explores the limitations of LLMs-as-judges, including position biases, and verifies agreement between LLM judgments and human preferences across multiple benchmarks.\nThese findings underscore the need for robust debiasing strategies to enhance the fairness and reliableness of LLMs-as-judges.',
    'Overconfidence bias\xa0(Khan et\xa0al., 2024; Jung et\xa0al., 2024) in the context of LLMs-as-judges refers to the tendency of models to exhibit an inflated level of confidence in their judgments, often resulting in overly assertive evaluations that may not accurately reflect the true reliability of the answer. This bias is particularly concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate the correctness of certain outputs, compromising the objectivity and dependability of assessments.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [`InformationRetrievalEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.93 |
| cosine_accuracy@3 | 0.99 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.93 |
| cosine_precision@3 | 0.33 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.93 |
| cosine_recall@3 | 0.99 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9704** |
| cosine_mrr@10 | 0.9603 |
| cosine_map@100 | 0.9603 |
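The table above can be reproduced with the same evaluator class. Below is a minimal sketch in which the `queries`, `corpus`, and `relevant_docs` dictionaries are placeholders standing in for the actual (unnamed) evaluation split; the exact metric key names in the returned results may vary slightly across library versions.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")

# Placeholder evaluation data: query ids and document ids mapped to texts,
# plus the set of relevant document ids per query.
queries = {"q1": "What metrics are introduced to investigate position bias?"}
corpus = {"d1": "A framework introduces repetition stability, position consistency, and preference fairness."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dev",
)
results = evaluator(model)
print(results)  # includes entries such as a "dev_cosine_ndcg@10" score
```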
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,334 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
  | details |            |            |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What are the main components of the evaluation function \( E \) as described in the preliminaries section?</code> | <code>LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods<br>1 Introduction<br>2 PRELIMINARIES<br>2.1 Evaluation Function E<br>2.2 Evaluation Input<br>2.2.1 Evaluation Type 𝒯<br>2.2.2 Evaluation Criteria 𝒞.<br>2.2.3 Evaluation References ℛ.<br>2.3 Evaluation Output<br>3 Functionality<br>3.1 Performance Evaluation<br>3.1.1 Responses Evaluation<br>3.1.2 Model Evaluation<br>3.2 Model Enhancement<br>3.2.1 Reward Modeling During Training<br>3.2.2 Acting as Verifier During Inference<br>3.2.3 Feedback for Refinement<br>3.3 Data Construction<br>3.3.1 Data Annotation<br>3.3.2 Data Synthesize<br>4 Methodology</code> |
  | <code>How do LLMs contribute to model enhancement according to the functionalities outlined in the survey?</code> | <code>LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods<br>1 Introduction<br>2 PRELIMINARIES<br>2.1 Evaluation Function E<br>2.2 Evaluation Input<br>2.2.1 Evaluation Type 𝒯<br>2.2.2 Evaluation Criteria 𝒞.<br>2.2.3 Evaluation References ℛ.<br>2.3 Evaluation Output<br>3 Functionality<br>3.1 Performance Evaluation<br>3.1.1 Responses Evaluation<br>3.1.2 Model Evaluation<br>3.2 Model Enhancement<br>3.2.1 Reward Modeling During Training<br>3.2.2 Acting as Verifier During Inference<br>3.2.3 Feedback for Refinement<br>3.3 Data Construction<br>3.3.1 Data Annotation<br>3.3.2 Data Synthesize<br>4 Methodology</code> |
  | <code>What are the different approaches discussed under the Single-LLM System methodology?</code> | <code>4 Methodology<br>4.1 Single-LLM System<br>4.1.1 Prompt-based<br>4.1.2 Tuning-based<br>4.1.3 Post-processing<br>4.2 Multi-LLM System<br>4.2.1 Communication<br>4.2.2 Aggregation<br>4.3 Human-AI Collaboration System<br>5 Application<br>5.1 General<br>5.2 Multimodal<br>5.3 Medical<br>5.4 Legal<br>5.5 Financial<br>5.6 Education<br>5.7 Information Retrieval<br>5.8 Others<br>5.8.1 Soft Engineering<br>5.8.2 Biology<br>5.8.3 Social Science<br>6 Meta-evaluation<br>6.1 Benchmarks<br>6.1.1 Code Generation<br>6.1.2 Machine Translation<br>6.1.3 Text Summarization<br>6.1.4 Dialogue Generation<br>6.1.5 Automatic Story Generation<br>6.1.6 Values Alignment<br>6.1.7 Recommendation<br>6.1.8 Search<br>6.1.9 Comprehensive Data<br>6.2 Metric</code> |
* Loss: [`MatryoshkaLoss`](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
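As an illustrative sketch (not the original training script), this configuration maps onto the library API roughly as shown below; the `truncate_dim` example at the end shows one practical consequence of Matryoshka training, namely that embeddings can be shortened to any of the listed dimensions at load time.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# MultipleNegativesRankingLoss applied at several nested embedding sizes
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)

# Because of the Matryoshka objective, the finetuned model can also be loaded
# with embeddings truncated to one of the trained dimensions:
small_model = SentenceTransformer(
    "chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df",
    truncate_dim=256,
)
```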
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 50
- `per_device_eval_batch_size`: 50
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
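A minimal sketch of how these non-default values would be passed to `SentenceTransformerTrainingArguments` (the output directory is a placeholder; everything else mirrors the list above, and assumes a recent sentence-transformers/transformers version):
```python
from sentence_transformers import SentenceTransformerTrainingArguments

# Only the non-default hyperparameters listed above are set explicitly.
args = SentenceTransformerTrainingArguments(
    output_dir="output/llm-mm-good",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
```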
#### All Hyperparameters