SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Snowflake/snowflake-arctic-embed-l
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
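In other words, the model runs a BERT backbone, takes the [CLS] token embedding as the sentence representation, and L2-normalizes it. Below is a minimal sketch of those three modules using the transformers API directly, assuming the backbone weights load from this repo's root as they do for typical Sentence Transformers checkpoints; in practice model.encode (shown in Usage) wraps exactly this pipeline.

import torch
from transformers import AutoTokenizer, AutoModel

repo = "chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df"
tokenizer = AutoTokenizer.from_pretrained(repo)
backbone = AutoModel.from_pretrained(repo)  # the BertModel from module (0)

inputs = tokenizer(["an example sentence"], padding=True, truncation=True,
                   max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = backbone(**inputs).last_hidden_state  # (1, seq_len, 1024)

cls = token_embeddings[:, 0]                                # (1) CLS-token pooling
embedding = torch.nn.functional.normalize(cls, p=2, dim=1)  # (2) L2 normalization
print(embedding.shape)  # torch.Size([1, 1024])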

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")
# Run inference
sentences = [
    'What metrics does the LLMS (2025) framework introduce to investigate position bias in pairwise comparisons?',
    'Recent studies have further examined position bias in the LLMs-as-judges context.\nFor instance, a framework (LLMS, 2025) is proposed to investigate position bias in pairwise comparisons, introducing metrics such as repetition stability, position consistency, and preference fairness to better understand how positions affect LLM judgments.\nAnother study (Zheng et al., 2023a) explores the limitations of LLMs-as-judges, including position biases, and verifies agreement between LLM judgments and human preferences across multiple benchmarks.\nThese findings underscore the need for robust debiasing strategies to enhance the fairness and reliableness of LLMs-as-judges.',
    'Overconfidence bias (Khan et al., 2024; Jung et al., 2024) in the context of LLMs-as-judges refers to the tendency of models to exhibit an inflated level of confidence in their judgments, often resulting in overly assertive evaluations that may not accurately reflect the true reliability of the answer. This bias is particularly concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate the correctness of certain outputs, compromising the objectivity and dependability of assessments.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 1024)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
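Because the embeddings are unit-normalized, cosine similarity reduces to a dot product, and the same model can be used directly for retrieval. A hedged sketch, reusing model and sentences from above, that ranks the two passages against the question (the variable names are illustrative):

# Treat the first sentence as a query and the rest as candidate passages.
query_embedding = model.encode(sentences[0])
passage_embeddings = model.encode(sentences[1:])

# model.similarity defaults to cosine similarity for this model.
scores = model.similarity(query_embedding, passage_embeddings)  # shape (1, 2)
best = scores.argmax().item()
print(f"Best passage: {best} (score {scores[0, best].item():.4f})")

Note that the base Snowflake/snowflake-arctic-embed-l model was trained with a query prefix for retrieval; whether this finetune expects the same prefix depends on how its training pairs were built, so it is worth validating both ways on your own data.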

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.93
cosine_accuracy@3 0.99
cosine_accuracy@5 1.0
cosine_accuracy@10 1.0
cosine_precision@1 0.93
cosine_precision@3 0.33
cosine_precision@5 0.2
cosine_precision@10 0.1
cosine_recall@1 0.93
cosine_recall@3 0.99
cosine_recall@5 1.0
cosine_recall@10 1.0
cosine_ndcg@10 0.9704
cosine_mrr@10 0.9603
cosine_map@100 0.9603
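These figures are the kind reported by the library's InformationRetrievalEvaluator. A sketch of how such numbers are typically produced; the queries, corpus, and relevant_docs mappings below are illustrative placeholders, not the actual held-out split:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder eval data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "What metrics are used to investigate position bias?"}
corpus = {
    "d1": "A framework introduces repetition stability, position consistency, ...",
    "d2": "Overconfidence bias refers to inflated confidence in judgments ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs,
    accuracy_at_k=[1, 3, 5, 10],
    precision_recall_at_k=[1, 3, 5, 10],
    mrr_at_k=[10], ndcg_at_k=[10], map_at_k=[100],
)
results = evaluator(model)  # dict of cosine_accuracy@k, cosine_ndcg@10, ...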

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,334 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 5 tokens, mean 23.12 tokens, max 72 tokens
    • sentence_1: string; min 3 tokens, mean 132.04 tokens, max 306 tokens
  • Samples:
    • sentence_0: "What are the main components of the evaluation function (E) as described in the preliminaries section?"
      sentence_1: a table-of-contents excerpt from "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods": 1 Introduction; 2 PRELIMINARIES (2.1 Evaluation Function E; 2.2 Evaluation Input: 2.2.1 Evaluation Type 𝒯, 2.2.2 Evaluation Criteria 𝒞, 2.2.3 Evaluation References ℛ; 2.3 Evaluation Output); 3 Functionality (3.1 Performance Evaluation: 3.1.1 Responses Evaluation, 3.1.2 Model Evaluation; 3.2 Model Enhancement: 3.2.1 Reward Modeling During Training, 3.2.2 Acting as Verifier During Inference, 3.2.3 Feedback for Refinement; 3.3 Data Construction: 3.3.1 Data Annotation, 3.3.2 Data Synthesize); 4 Methodology
    • sentence_0: "How do LLMs contribute to model enhancement according to the functionalities outlined in the survey?"
      sentence_1: the same table-of-contents excerpt as in the first sample
    • sentence_0: "What are the different approaches discussed under the Single-LLM System methodology?"
      sentence_1: a table-of-contents excerpt covering 4 Methodology (4.1 Single-LLM System: 4.1.1 Prompt-based, 4.1.2 Tuning-based, 4.1.3 Post-processing; 4.2 Multi-LLM System: 4.2.1 Communication, 4.2.2 Aggregation; 4.3 Human-AI Collaboration System); 5 Application (5.1 General, 5.2 Multimodal, 5.3 Medical, 5.4 Legal, 5.5 Financial, 5.6 Education, 5.7 Information Retrieval; 5.8 Others: 5.8.1 Soft Engineering, 5.8.2 Biology, 5.8.3 Social Science); 6 Meta-evaluation (6.1 Benchmarks: 6.1.1 Code Generation, 6.1.2 Machine Translation, 6.1.3 Text Summarization, 6.1.4 Dialogue Generation, 6.1.5 Automatic Story Generation, 6.1.6 Values Alignment, 6.1.7 Recommendation, 6.1.8 Search, 6.1.9 Comprehensive Data; 6.2 Metric)
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
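A plausible reconstruction of this loss setup with the Sentence Transformers API; the base-model choice follows the Model Details section, and matryoshka_weights / n_dims_per_step above are the library defaults:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
# In-batch-negatives ranking loss, applied at several truncated
# dimensionalities so prefixes of each embedding stay useful on their own.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

One practical consequence of Matryoshka training: the model can be loaded with a truncate_dim argument, e.g. SentenceTransformer(model_id, truncate_dim=256), to trade a little accuracy for much smaller vectors.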
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 50
  • per_device_eval_batch_size: 50
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
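A hedged sketch of a training run consistent with these non-default values, using the Sentence Transformers 3.x trainer; train_dataset (columns sentence_0/sentence_1), loss, and evaluator stand for the objects described in the Training Dataset and Metrics sections:

from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # the 1,334 (sentence_0, sentence_1) pairs
    loss=loss,                    # MatryoshkaLoss as configured above
    evaluator=evaluator,          # the IR evaluator from the Metrics section
)
trainer.train()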

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 50
  • per_device_eval_batch_size: 50
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step cosine_ndcg@10
1.0 27 0.9697
1.8519 50 0.9788
2.0 54 0.9775
3.0 81 0.9741
3.7037 100 0.9791
4.0 108 0.9741
5.0 135 0.9782
5.5556 150 0.9782
6.0 162 0.9782
7.0 189 0.9782
7.4074 200 0.9741
8.0 216 0.9741
9.0 243 0.9704
9.2593 250 0.9704
10.0 270 0.9704

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 2.14.4
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}