metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:400
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: >-
What is the title of the dataset introduced by Jin, B. Dhingra, Z. Liu, W.
Cohen, and X. Lu in their 2019 publication?
sentences:
- TechQA [3]
- "Q.\_Jin, B.\_Dhingra, Z.\_Liu, W.\_Cohen, and X.\_Lu.\n\n\nPubMedQA: A dataset for biomedical research question answering.\n\n\nIn K.\_Inui, J.\_Jiang, V.\_Ng, and X.\_Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567–2577, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.\n\n\ndoi: 10.18653/v1/D19-1259."
- "Our contributions address the need for standardized benchmarks and methodologies, enabling more precise and actionable insights into the strengths and weaknesses of different RAG systems. This, in turn, will facilitate iterative improvement of RAG models, driving forward the capabilities of retrieval-augmented generation in real-world applications.\n\n\n\nReferences\n\n\nAdlakha et\_al. [2023]\n\nV.\_Adlakha, P.\_BehnamGhader, X.\_H. Lu, N.\_Meade, and S.\_Reddy."
- source_sentence: >-
What does the 2024 paper by Es et al. propose regarding the evaluation of
retrieval augmented generation?
sentences:
- "doi: 10.18653/v1/2023.findings-acl.60.\n\n\nURL https://aclanthology.org/2023.findings-acl.60.\n\n\n\n\nDinan et\_al. [2019]\n\nE.\_Dinan, S.\_Roller, K.\_Shuster, A.\_Fan, M.\_Auli, and J.\_Weston.\n\n\nWizard of wikipedia: Knowledge-powered conversational agents, 2019.\n\n\n\n\nEs et\_al. [2024]\n\nS.\_Es, J.\_James, L.\_Espinosa\_Anke, and S.\_Schockaert.\n\n\nRAGAs: Automated evaluation of retrieval augmented generation."
- >-
Source Domains
RAGBench comprises five distinct domains: bio-medical research
(PubmedQA, CovidQA), general knowledge (HotpotQA, MS Marco, HAGRID,
ExperQA), legal contracts (CuAD), customer support (DelucionQA, EManual,
TechQA), and finance (FinBench, TAT-QA). We select these specific
domains based on availability of data, and applicability to real-world
RAG applications across different industry verticals. For detailed
descriptions of each component data source, refer to Appendix 7.2.
- >-
The overall_supported_explanation field is a string explaining why the
response
*as a whole* is or is not supported by the documents. In this field,
provide a
step-by-step breakdown of the claims made in the response and the
support (or
lack thereof) for those claims in the documents. Begin by assessing each
claim
separately, one by one; don’t make any remarks about the response as a
whole
until you have assessed all the claims in isolation.
- source_sentence: What are some common sources of questions in research or surveys?
sentences:
- "Kwiatkowski et\_al. [2019]\n\nT.\_Kwiatkowski, J.\_Palomaki, O.\_Redfield, M.\_Collins, A.\_Parikh, C.\_Alberti, D.\_Epstein, I.\_Polosukhin, M.\_Kelcey, J.\_Devlin, K.\_Lee, K.\_N. Toutanova, L.\_Jones, M.-W. Chang, A.\_Dai, J.\_Uszkoreit, Q.\_Le, and S.\_Petrov.\n\n\nNatural questions: a benchmark for question answering research.\n\n\nTransactions of the Association of Computational Linguistics, 2019.\n\n\n\n\nLaurer et\_al. [2022]\n\nM.\_Laurer, W.\_van Atteveldt, A.\_Casas, and K.\_Welbers."
- Question Sources
- >-
the overall RAG system performance, with the potential to provide
granular, actionable insights to the RAG practitioner.
- source_sentence: >-
What evaluation metrics are reported for the response-level hallucination
detection task?
sentences:
- >-
4.3 Evaluation
Our granular annotation schema allows for various evaluation setups. For
example, we could evaluate either span-level or example/response-level
predictions. For easy comparison with existing RAG evaluation approaches
that are less granular, we report area under the receiver-operator curve
(AUROC) on the response-level hallucination detection task, and root
mean squared error (RMSE) for example-level context Relevance and
Utilization predictions.
- >-
EManual is a question answer dataset comprising consumer electronic
device manuals and realistic questions about them composed by human
annotators. The subset made available at the time of writing amounts to
659 unique questions about the Samsung Smart TV/remote and the
accompanying user manual, segmented into 261 chunks. To form a RAG
dataset, we embed the manual segments into a vector database with OpenAI
embedding and retrieve up to 3 context documents per question from it.
For each
- >-
Table 3: Benchmark evaluation on test splits. Reporting AUROC for
predicting hallucinated responses (Hal), RMSE for predicting Context
Relevance (Rel) and utilization (Util). ∗ indicates statistical
significance at 95% confidence intervals, measured by bootstrap
comparing the top and second-best results. RAGAS and Trulens do not
evaluate Utilization.
GPT-3.5
RAGAS
TruLens
DeBERTA
Dataset
Hal↑
Rel↓
Util↓
Hal↑
Rel↓
- source_sentence: >-
What is the main contribution of Kwiatkowski et al. [2019] in the field of
question answering research?
sentences:
- >-
The sentence_support_information field is a list of objects, one for
each sentence
in the response. Each object MUST have the following fields:
- response_sentence_key: a string identifying the sentence in the
response.
This key is the same as the one used in the response above.
- explanation: a string explaining why the sentence is or is not
supported by the
documents.
- supporting_sentence_keys: keys (e.g. ’0a’) of sentences from the
documents that
- "Kwiatkowski et\_al. [2019]\n\nT.\_Kwiatkowski, J.\_Palomaki, O.\_Redfield, M.\_Collins, A.\_Parikh, C.\_Alberti, D.\_Epstein, I.\_Polosukhin, M.\_Kelcey, J.\_Devlin, K.\_Lee, K.\_N. Toutanova, L.\_Jones, M.-W. Chang, A.\_Dai, J.\_Uszkoreit, Q.\_Le, and S.\_Petrov.\n\n\nNatural questions: a benchmark for question answering research.\n\n\nTransactions of the Association of Computational Linguistics, 2019.\n\n\n\n\nLaurer et\_al. [2022]\n\nM.\_Laurer, W.\_van Atteveldt, A.\_Casas, and K.\_Welbers."
- >-
with consistent annotations. To best represent real-world RAG scenarios,
we vary a number of parameters to construct the benchmark: the source
domain, number of context documents, context token length, and the
response generator model. Figure 1 illustrates where these variable
parameters fall in the RAG pipeline.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.8571428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9642857142857143
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.8571428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32142857142857145
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.8571428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9642857142857143
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9385586452838898
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9178571428571428
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9178571428571428
name: Cosine Map@100
SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
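In rough terms, the three modules are a BERT encoder, CLS-token pooling, and L2 normalization. The following is a minimal sketch of that computation using plain Transformers; it loads the base checkpoint purely for illustration, whereas the fine-tuned weights are the ones in this repository.
import torch
from transformers import AutoModel, AutoTokenizer

# Illustration only: loads the base checkpoint rather than this fine-tuned model
tokenizer = AutoTokenizer.from_pretrained("Snowflake/snowflake-arctic-embed-l")
encoder = AutoModel.from_pretrained("Snowflake/snowflake-arctic-embed-l")

batch = tokenizer(["An example sentence"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state          # (batch, seq_len, 1024)
cls = hidden[:, 0]                                       # pooling_mode_cls_token=True
embedding = torch.nn.functional.normalize(cls, dim=1)    # the Normalize() module
print(embedding.shape)                                   # torch.Size([1, 1024])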
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chelleboyer/llm-evals-2-79b954ef-4798-4994-be72-a88d46b8ecca")
# Run inference
sentences = [
'What is the main contribution of Kwiatkowski et al. [2019] in the field of question answering research?',
'Kwiatkowski et\xa0al. [2019]\n\nT.\xa0Kwiatkowski, J.\xa0Palomaki, O.\xa0Redfield, M.\xa0Collins, A.\xa0Parikh, C.\xa0Alberti, D.\xa0Epstein, I.\xa0Polosukhin, M.\xa0Kelcey, J.\xa0Devlin, K.\xa0Lee, K.\xa0N. Toutanova, L.\xa0Jones, M.-W. Chang, A.\xa0Dai, J.\xa0Uszkoreit, Q.\xa0Le, and S.\xa0Petrov.\n\n\nNatural questions: a benchmark for question answering research.\n\n\nTransactions of the Association of Computational Linguistics, 2019.\n\n\n\n\nLaurer et\xa0al. [2022]\n\nM.\xa0Laurer, W.\xa0van Atteveldt, A.\xa0Casas, and K.\xa0Welbers.',
'The sentence_support_information field is a list of objects, one for each sentence\nin the response. Each object MUST have the following fields:\n- response_sentence_key: a string identifying the sentence in the response.\nThis key is the same as the one used in the response above.\n- explanation: a string explaining why the sentence is or is not supported by the\ndocuments.\n- supporting_sentence_keys: keys (e.g. ’0a’) of sentences from the documents that',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8571 |
cosine_accuracy@3 | 0.9643 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.8571 |
cosine_precision@3 | 0.3214 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.8571 |
cosine_recall@3 | 0.9643 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9386 |
cosine_mrr@10 | 0.9179 |
cosine_map@100 | 0.9179 |
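The numbers above are produced by Sentence Transformers' InformationRetrievalEvaluator. A minimal sketch of reproducing this kind of evaluation on your own data follows; the queries, corpus, and relevance judgments are hypothetical placeholders, not the actual evaluation split.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("chelleboyer/llm-evals-2-79b954ef-4798-4994-be72-a88d46b8ecca")

# Hypothetical placeholder data: id -> text mappings plus relevance judgments per query
queries = {"q1": "What domains does RAGBench cover?"}
corpus = {
    "d1": "RAGBench comprises five distinct domains: bio-medical research, general knowledge, ...",
    "d2": "EManual is a question answer dataset comprising consumer electronic device manuals ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = evaluator(model)
print(results)  # dict of cosine accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100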
Training Details
Training Dataset
Unnamed Dataset
- Size: 400 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 400 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 3 tokens, mean: 21.42 tokens, max: 53 tokens | min: 3 tokens, mean: 93.8 tokens, max: 200 tokens |
- Samples:
sentence_0 | sentence_1 |
---|---|
What are the key components and criteria used in the TRACe Evaluation Framework within RAGBench? | RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems<br>1 Introduction<br>2 Related Work<br>RAG evaluation<br>Finetuned RAG evaluation models<br>3 RAGBench Construction<br>3.1 Component Datasets<br>Source Domains<br>Context Token Length<br>Task Types<br>Question Sources<br>Response Generation<br>Data Splits<br>3.2 TRACe Evaluation Framework<br>Definitions<br>Context Relevance<br>Context Utilization<br>Completeness<br>Adherence<br>3.3 RAGBench Statistics<br>3.4 LLM annotator |
How does RAGBench utilize component datasets to construct a benchmark for Retrieval-Augmented Generation systems? | RAGBench: Explainable Benchmark for Retrieval-Augmented Generation Systems<br>1 Introduction<br>2 Related Work<br>RAG evaluation<br>Finetuned RAG evaluation models<br>3 RAGBench Construction<br>3.1 Component Datasets<br>Source Domains<br>Context Token Length<br>Task Types<br>Question Sources<br>Response Generation<br>Data Splits<br>3.2 TRACe Evaluation Framework<br>Definitions<br>Context Relevance<br>Context Utilization<br>Completeness<br>Adherence<br>3.3 RAGBench Statistics<br>3.4 LLM annotator |
What are the key components and findings discussed in the RAGBench Statistics and Case Study sections? | 3.3 RAGBench Statistics<br>3.4 LLM annotator<br>Alignment with Human Judgements<br>3.5 RAG Case Study<br>4 Experiments<br>4.1 LLM Judge<br>4.2 Fine-tuned Judge<br>4.3 Evaluation<br>5 Results<br>Estimating Context Relevance is Difficult<br>6 Conclusion<br>7 Appendix<br>7.1 RAGBench Code and Data<br>7.2 RAGBench Dataset Details<br>PubMedQA [14]<br>CovidQA-RAG<br>HotpotQA [42]<br>MS Marco [28]<br>CUAD [12]<br>DelucionQA [33]<br>EManual [27]<br>TechQA [3]<br>FinQA [6]<br>TAT-QA [47]<br>HAGRID [15]<br>ExpertQA [25] |
- Loss: MatryoshkaLoss with these parameters:
{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 }
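As a minimal sketch (not the exact training script) of how this loss combination is typically set up with the Sentence Transformers API:
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
inner_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives over (sentence_0, sentence_1) pairs
loss = MatryoshkaLoss(model, inner_loss,
                      matryoshka_dims=[768, 512, 256, 128, 64],
                      matryoshka_weights=[1, 1, 1, 1, 1],
                      n_dims_per_step=-1)
Because MatryoshkaLoss supervises truncated prefixes of the embedding, the resulting vectors can usually be shortened to any of the listed dimensions (for example via the truncate_dim argument of SentenceTransformer) with only a modest drop in retrieval quality.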
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 5
- per_device_eval_batch_size: 5
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
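A minimal sketch of how these non-default values plug into the Sentence Transformers trainer, reusing the model and loss constructed in the sketch above; the toy dataset and output path below are hypothetical placeholders, not the card's actual training data.
from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# Hypothetical toy pair dataset with the same column layout as the training data above
train_dataset = Dataset.from_dict({
    "sentence_0": ["What domains does RAGBench cover?"],
    "sentence_1": ["RAGBench comprises five distinct domains ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/llm-evals-2",        # hypothetical output path
    eval_strategy="steps",
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,                            # SentenceTransformer from the loss sketch above
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,             # placeholder eval split, for illustration only
    loss=loss,                              # the MatryoshkaLoss configured above
)
trainer.train()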
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 5
- per_device_eval_batch_size: 5
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss | cosine_ndcg@10 |
---|---|---|---|
0.625 | 50 | - | 0.9517 |
1.0 | 80 | - | 0.9649 |
1.25 | 100 | - | 0.9649 |
1.875 | 150 | - | 0.9517 |
2.0 | 160 | - | 0.9517 |
2.5 | 200 | - | 0.9386 |
3.0 | 240 | - | 0.9386 |
3.125 | 250 | - | 0.9517 |
3.75 | 300 | - | 0.9386 |
4.0 | 320 | - | 0.9517 |
4.375 | 350 | - | 0.9517 |
5.0 | 400 | - | 0.9517 |
5.625 | 450 | - | 0.9517 |
6.0 | 480 | - | 0.9401 |
6.25 | 500 | 0.3877 | 0.9401 |
6.875 | 550 | - | 0.9386 |
7.0 | 560 | - | 0.9386 |
7.5 | 600 | - | 0.9401 |
8.0 | 640 | - | 0.9401 |
8.125 | 650 | - | 0.9401 |
8.75 | 700 | - | 0.9386 |
9.0 | 720 | - | 0.9386 |
9.375 | 750 | - | 0.9386 |
10.0 | 800 | - | 0.9386 |
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 2.14.4
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}