---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:400
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What is the title of the dataset introduced by Jin, B. Dhingra,
Z. Liu, W. Cohen, and X. Lu in their 2019 publication?
sentences:
- TechQA [3]
- 'Q. Jin, B. Dhingra, Z. Liu, W. Cohen, and X. Lu.
PubMedQA: A dataset for biomedical research question answering.
In K. Inui, J. Jiang, V. Ng, and X. Wan, editors, Proceedings of the 2019 Conference
on Empirical Methods in Natural Language Processing and the 9th International
Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577,
Hong Kong, China, Nov. 2019. Association for Computational Linguistics.
doi: 10.18653/v1/D19-1259.'
- 'Our contributions address the need for standardized benchmarks and methodologies,
enabling more precise and actionable insights into the strengths and weaknesses
of different RAG systems. This, in turn, will facilitate iterative improvement
of RAG models, driving forward the capabilities of retrieval-augmented generation
in real-world applications.
References
Adlakha et al. [2023]
V. Adlakha, P. BehnamGhader, X. H. Lu, N. Meade, and S. Reddy.'
- source_sentence: What does the 2024 paper by Es et al. propose regarding the evaluation
of retrieval augmented generation?
sentences:
- 'doi: 10.18653/v1/2023.findings-acl.60.
URL https://aclanthology.org/2023.findings-acl.60.
Dinan et al. [2019]
E. Dinan, S. Roller, K. Shuster, A. Fan, M. Auli, and J. Weston.
Wizard of wikipedia: Knowledge-powered conversational agents, 2019.
Es et al. [2024]
S. Es, J. James, L. Espinosa Anke, and S. Schockaert.
RAGAs: Automated evaluation of retrieval augmented generation.'
- 'Source Domains
RAGBench comprises five distinct domains: bio-medical research (PubmedQA, CovidQA),
general knowledge (HotpotQA, MS Marco, HAGRID, ExperQA), legal contracts (CuAD),
customer support (DelucionQA, EManual, TechQA), and finance (FinBench, TAT-QA).
We select these specific domains based on availability of data, and applicability
to real-world RAG applications across different industry verticals. For detailed
descriptions of each component data source, refer to Appendix 7.2.'
- 'The overall_supported_explanation field is a string explaining why the response
*as a whole* is or is not supported by the documents. In this field, provide a
step-by-step breakdown of the claims made in the response and the support (or
lack thereof) for those claims in the documents. Begin by assessing each claim
separately, one by one; don’t make any remarks about the response as a whole
until you have assessed all the claims in isolation.'
- source_sentence: What are some common sources of questions in research or surveys?
sentences:
- 'Kwiatkowski et al. [2019]
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein,
I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W.
Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov.
Natural questions: a benchmark for question answering research.
Transactions of the Association of Computational Linguistics, 2019.
Laurer et al. [2022]
M. Laurer, W. van Atteveldt, A. Casas, and K. Welbers.'
- Question Sources
- the overall RAG system performance, with the potential to provide granular, actionable
insights to the RAG practitioner.
- source_sentence: What evaluation metrics are reported for the response-level hallucination
detection task?
sentences:
- '4.3 Evaluation
Our granular annotation schema allows for various evaluation setups. For example,
we could evaluate either span-level or example/response-level predictions. For
easy comparison with existing RAG evaluation approaches that are less granular,
we report area under the receiver-operator curve (AUROC) on the response-level
hallucination detection task, and root mean squared error (RMSE) for example-level
context Relevance and Utilization predictions.'
- EManual is a question answer dataset comprising consumer electronic device manuals
and realistic questions about them composed by human annotators. The subset made
available at the time of writing amounts to 659 unique questions about the Samsung
Smart TV/remote and the accompanying user manual, segmented into 261 chunks. To
form a RAG dataset, we embed the manual segments into a vector database with OpenAI
embedding and retrieve up to 3 context documents per question from it. For each
- 'Table 3: Benchmark evaluation on test splits. Reporting AUROC for predicting
hallucinated responses (Hal), RMSE for predicting Context Relevance (Rel) and
utilization (Util). ∗ indicates statistical significance at 95% confidence intervals,
measured by bootstrap comparing the top and second-best results. RAGAS and Trulens
do not evaluate Utilization.
GPT-3.5
RAGAS
TruLens
DeBERTA
Dataset
Hal↑↑\uparrow↑
Rel↓↓\downarrow↓
Util↓↓\downarrow↓
Hal↑↑\uparrow↑
Rel↓↓\downarrow↓'
- source_sentence: What is the main contribution of Kwiatkowski et al. [2019] in the
field of question answering research?
sentences:
- 'The sentence_support_information field is a list of objects, one for each sentence
in the response. Each object MUST have the following fields:
- response_sentence_key: a string identifying the sentence in the response.
This key is the same as the one used in the response above.
- explanation: a string explaining why the sentence is or is not supported by
the
documents.
- supporting_sentence_keys: keys (e.g. ’0a’) of sentences from the documents that'
- 'Kwiatkowski et al. [2019]
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein,
I. Polosukhin, M. Kelcey, J. Devlin, K. Lee, K. N. Toutanova, L. Jones, M.-W.
Chang, A. Dai, J. Uszkoreit, Q. Le, and S. Petrov.
Natural questions: a benchmark for question answering research.
Transactions of the Association of Computational Linguistics, 2019.
Laurer et al. [2022]
M. Laurer, W. van Atteveldt, A. Casas, and K. Welbers.'
- 'with consistent annotations. To best represent real-world RAG scenarios, we vary
a number parameters to construct the benchmark: the source domain, number of context
documents, context token length, and the response generator model Figure 1 illustrates
where these variable parameters fall in the RAG pipeline.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8571428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9642857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8571428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32142857142857145
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8571428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9642857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9385586452838898
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9178571428571428
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9178571428571428
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chelleboyer/llm-evals-2-79b954ef-4798-4994-be72-a88d46b8ecca")
# Run inference
sentences = [
'What is the main contribution of Kwiatkowski et al. [2019] in the field of question answering research?',
'Kwiatkowski et\xa0al. [2019]\n\nT.\xa0Kwiatkowski, J.\xa0Palomaki, O.\xa0Redfield, M.\xa0Collins, A.\xa0Parikh, C.\xa0Alberti, D.\xa0Epstein, I.\xa0Polosukhin, M.\xa0Kelcey, J.\xa0Devlin, K.\xa0Lee, K.\xa0N. Toutanova, L.\xa0Jones, M.-W. Chang, A.\xa0Dai, J.\xa0Uszkoreit, Q.\xa0Le, and S.\xa0Petrov.\n\n\nNatural questions: a benchmark for question answering research.\n\n\nTransactions of the Association of Computational Linguistics, 2019.\n\n\n\n\nLaurer et\xa0al. [2022]\n\nM.\xa0Laurer, W.\xa0van Atteveldt, A.\xa0Casas, and K.\xa0Welbers.',
'The sentence_support_information field is a list of objects, one for each sentence\nin the response. Each object MUST have the following fields:\n- response_sentence_key: a string identifying the sentence in the response.\nThis key is the same as the one used in the response above.\n- explanation: a string explaining why the sentence is or is not supported by the\ndocuments.\n- supporting_sentence_keys: keys (e.g. ’0a’) of sentences from the documents that',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
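Beyond encoding, the embeddings can be used directly for semantic search over a small corpus. The snippet below is a minimal sketch using `sentence_transformers.util.semantic_search`; the query and corpus strings are illustrative placeholders, not part of the training data.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("chelleboyer/llm-evals-2-79b954ef-4798-4994-be72-a88d46b8ecca")

# Illustrative placeholder corpus and query
corpus = [
    "RAGBench comprises five distinct domains, including bio-medical research and legal contracts.",
    "Natural Questions is a benchmark for question answering research.",
]
query = "Which domains does RAGBench cover?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus passages by cosine similarity to the query
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])
```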
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.8571 |
| cosine_accuracy@3 | 0.9643 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.8571 |
| cosine_precision@3 | 0.3214 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.8571 |
| cosine_recall@3 | 0.9643 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9386** |
| cosine_mrr@10 | 0.9179 |
| cosine_map@100 | 0.9179 |
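These figures were produced with the evaluator linked above. The snippet below is a minimal sketch of how such an evaluation could be reproduced; the `queries`, `corpus`, and `relevant_docs` dictionaries are illustrative placeholders rather than the actual evaluation split.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("chelleboyer/llm-evals-2-79b954ef-4798-4994-be72-a88d46b8ecca")

# Illustrative placeholders: ids -> texts, and query ids -> relevant corpus ids
queries = {"q1": "What does the TRACe evaluation framework measure?"}
corpus = {
    "d1": "3.2 TRACe Evaluation Framework: Context Relevance, Context Utilization, Completeness, Adherence.",
    "d2": "An unrelated passage about consumer electronics manuals.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = evaluator(model)
print(results)  # in recent sentence-transformers versions, a dict of accuracy@k, precision@k, recall@k, NDCG@10, MRR@10, MAP@100
```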
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 400 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 400 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
* Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What are the key components and criteria used in the TRACe Evaluation Framework within RAGBench?</code> | <code>RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems<br>1 Introduction<br>2 Related Work<br>RAG evaluation<br>Finetuned RAG evaluation models<br>3 RAGBench Construction<br>3.1 Component Datasets<br>Source Domains<br>Context Token Length<br>Task Types<br>Question Sources<br>Response Generation<br>Data Splits<br>3.2 TRACe Evaluation Framework<br>Definitions<br>Context Relevance<br>Context Utilization<br>Completeness<br>Adherence<br>3.3 RAGBench Statistics<br>3.4 LLM annotator</code> |
  | <code>How does RAGBench utilize component datasets to construct a benchmark for Retrieval-Augmented Generation systems?</code> | <code>RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems<br>1 Introduction<br>2 Related Work<br>RAG evaluation<br>Finetuned RAG evaluation models<br>3 RAGBench Construction<br>3.1 Component Datasets<br>Source Domains<br>Context Token Length<br>Task Types<br>Question Sources<br>Response Generation<br>Data Splits<br>3.2 TRACe Evaluation Framework<br>Definitions<br>Context Relevance<br>Context Utilization<br>Completeness<br>Adherence<br>3.3 RAGBench Statistics<br>3.4 LLM annotator</code> |
  | <code>What are the key components and findings discussed in the RAGBench Statistics and Case Study sections?</code> | <code>3.3 RAGBench Statistics<br>3.4 LLM annotator<br>Alignment with Human Judgements<br>3.5 RAG Case Study<br>4 Experiments<br>4.1 LLM Judge<br>4.2 Fine-tuned Judge<br>4.3 Evaluation<br>5 Results<br>Estimating Context Relevance is Difficult<br>6 Conclusion<br>7 Appendix<br>7.1 RAGBench Code and Data<br>7.2 RAGBench Dataset Details<br>PubMedQA [14]<br>CovidQA-RAG<br>HotpotQA [42]<br>MS Marco [28]<br>CUAD [12]<br>DelucionQA [33]<br>EManual [27]<br>TechQA [3]<br>FinQA [6]<br>TAT-QA [47]<br>HAGRID [15]<br>ExpertQA [25]</code> |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [
        768,
        512,
        256,
        128,
        64
    ],
    "matryoshka_weights": [
        1,
        1,
        1,
        1,
        1
    ],
    "n_dims_per_step": -1
}
```
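This configuration wraps `MultipleNegativesRankingLoss` (in-batch negatives) in `MatryoshkaLoss`, so the same ranking objective is applied at several truncated embedding sizes. The snippet below is a minimal sketch of constructing that loss with the parameters above; it is not the exact training script.
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch negatives ranking loss, re-applied on truncated embeddings of each listed size
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```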
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 5
- `per_device_eval_batch_size`: 5
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
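As a rough sketch, these non-default values map onto a `SentenceTransformerTrainingArguments` / `SentenceTransformerTrainer` setup as shown below; the two-row dataset, the `output_dir`, and the reuse of the training data for evaluation are illustrative placeholders, not the actual configuration.
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Illustrative two-row stand-in for the 400-pair (sentence_0, sentence_1) dataset
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What are the key components and criteria used in the TRACe Evaluation Framework within RAGBench?",
        "What evaluation metrics are reported for the response-level hallucination detection task?",
    ],
    "sentence_1": [
        "3.2 TRACe Evaluation Framework ...",
        "4.3 Evaluation ...",
    ],
})

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="llm-evals-2",  # placeholder output directory
    eval_strategy="steps",
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; a held-out split would be used in practice
    loss=loss,
)
trainer.train()
```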
#### All Hyperparameters