SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
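
The same three-module stack can be assembled by hand with the sentence_transformers models API. The snippet below is a minimal sketch for illustration only (to use the published checkpoint, simply load it by name as shown under Usage); the variable names are arbitrary.

from sentence_transformers import SentenceTransformer, models

# Illustrative reconstruction of the module listing above, not the original training script.
word_embedding = models.Transformer(
    "sentence-transformers/all-MiniLM-L6-v2",  # BertModel backbone
    max_seq_length=256,
    do_lower_case=False,
)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 384
    pooling_mode="mean",  # mean-token pooling, matching the listing above
)
normalize = models.Normalize()  # L2-normalization, so dot product equals cosine similarity

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])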

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("s4um1l/saumil-ft-633e5453-0b3a-4693-9108-c6cc8a87730f")
# Run inference
sentences = [
    'What are the key components that should be included in the campaign performance report during the wrap-up phase?',
    "**B. Reporting & Analysis (Wrap-up):**\n  [ ] Compile a comprehensive campaign performance report.\n  [ ] Analyze what worked well and what didn't.\n  [ ] Calculate ROI and cost per acquisition/lead.\n  [ ] Document key learnings and insights for future campaigns.\n  [ ] Share report with stakeholders.\n\n**C. Housekeeping:**\n  [ ] Archive campaign assets and documentation.\n  [ ] Update budget tracking.\n  [ ] Send thank-you notes or payments to influencers/partners (if applicable).",
    '**1. Enhance Onboarding & First Purchase Experience:**\n   - **Welcome Email Series:** Educate new subscribers/customers about the brand story, unique value proposition (what makes us different and better), and product range. Include a modest first-time purchase incentive (e.g., 10% off, free sample with first order). The series could be 3-5 emails spaced over a week.\n   - **Post-Purchase Communication:** Send timely order/shipping confirmations with tracking links. Follow up 5-7 days after delivery to check satisfaction, offer support, and solicit product reviews or social shares.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]

Evaluation

Metrics

Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.9583 |
| cosine_accuracy@3   | 1.0    |
| cosine_accuracy@5   | 1.0    |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.9583 |
| cosine_precision@3  | 0.3333 |
| cosine_precision@5  | 0.2    |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.9583 |
| cosine_recall@3     | 1.0    |
| cosine_recall@5     | 1.0    |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@10      | 0.9846 |
| cosine_mrr@10       | 0.9792 |
| cosine_map@100      | 0.9792 |
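
These are standard information-retrieval metrics computed over query/passage pairs. They can be recomputed with the library's InformationRetrievalEvaluator; the sketch below uses toy, hypothetical queries and corpus entries purely to show the expected input format (dicts keyed by ID, plus a mapping from query ID to the set of relevant document IDs).

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("s4um1l/saumil-ft-633e5453-0b3a-4693-9108-c6cc8a87730f")

# Toy stand-ins for a held-out evaluation split (IDs and texts are hypothetical).
queries = {"q1": "What is the standard employee discount rate?"}
corpus = {
    "d1": "The standard employee discount is 20% off the retail price.",
    "d2": "Archive campaign assets and documentation.",
}
relevant_docs = {"q1": {"d1"}}  # which corpus entries answer each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP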

Training Details

Training Dataset

Unnamed Dataset

  • Size: 76 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 76 samples:

    |         | sentence_0                                         | sentence_1                                          |
    |:--------|:---------------------------------------------------|:----------------------------------------------------|
    | type    | string                                             | string                                              |
    | details | min: 12 tokens, mean: 17.87 tokens, max: 32 tokens | min: 18 tokens, mean: 132.0 tokens, max: 196 tokens |
  • Samples:

    Sample 1
    - sentence_0: What is the standard discount rate offered to employees on company products?
    - sentence_1:
      # Company Policy: Employee Discounts
      1. Policy Statement: Actively employed staff are entitled to a discount on company products as a benefit of employment.
      2. Discount Rate: The standard employee discount is 20% off the retail price.
      3. Eligibility & Verification:
      - Eligibility: All full-time and part-time employees currently employed by the company.
      - Verification: Employees must use their official company email address when placing orders. The discount code will be provided upon verification of employment status by HR or the direct manager.
      - Code Usage: Each employee receives a unique, non-transferable discount code.

    Sample 2
    - sentence_0: How must employees verify their eligibility to receive the discount code?
    - sentence_1:
      # Company Policy: Employee Discounts
      1. Policy Statement: Actively employed staff are entitled to a discount on company products as a benefit of employment.
      2. Discount Rate: The standard employee discount is 20% off the retail price.
      3. Eligibility & Verification:
      - Eligibility: All full-time and part-time employees currently employed by the company.
      - Verification: Employees must use their official company email address when placing orders. The discount code will be provided upon verification of employment status by HR or the direct manager.
      - Code Usage: Each employee receives a unique, non-transferable discount code.

    Sample 3
    - sentence_0: Who is eligible to use the employee discount according to the usage guidelines?
    - sentence_1:
      4. Usage Guidelines:
      - Personal Use: The discount is intended for personal use by the employee and their immediate family (spouse, domestic partner, children residing in the same household).
      - Resale Prohibited: Items purchased with the employee discount may not be resold.
      - Frequency Limit: A reasonable usage limit may be monitored (e.g., maximum $2000 in discounted purchases per calendar year, subject to review).
      - Combination: Cannot be combined with other promotional offers, sales, or discounts unless explicitly stated.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
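
The listing above indicates a MatryoshkaLoss wrapping MultipleNegativesRankingLoss, applied at output dimensions 384, 256, 128, and 64 with equal weights. A minimal sketch of constructing such a loss with the sentence_transformers API (dataset and trainer wiring omitted) could look like this:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# The inner contrastive loss is evaluated at every truncated embedding size in matryoshka_dims.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model=model,
    loss=inner_loss,
    matryoshka_dims=[384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1],
)

One practical consequence of this loss is that the finetuned embeddings can be truncated at inference time, e.g. SentenceTransformer("s4um1l/saumil-ft-633e5453-0b3a-4693-9108-c6cc8a87730f", truncate_dim=128), trading some retrieval quality for smaller vectors.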
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 10
  • multi_dataset_batch_sampler: round_robin
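
Assuming training used the standard SentenceTransformerTrainer, the non-default values above would be passed through SentenceTransformerTrainingArguments roughly as follows; output_dir, model, train_dataset, loss, and evaluator are placeholders for the objects described elsewhere in this card:

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,                  # the SentenceTransformer being finetuned
    args=args,
    train_dataset=train_dataset,  # pairs with sentence_0 / sentence_1 columns
    loss=loss,                    # e.g. the MatryoshkaLoss sketched above
    evaluator=evaluator,          # e.g. an InformationRetrievalEvaluator
)
trainer.train()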

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

| Epoch | Step | cosine_ndcg@10 |
|:------|:-----|:---------------|
| 1.0   | 8    | 0.9846         |
| 2.0   | 16   | 0.9923         |
| 3.0   | 24   | 0.9923         |
| 4.0   | 32   | 0.9923         |
| 5.0   | 40   | 0.9923         |
| 6.0   | 48   | 0.9923         |
| 6.25  | 50   | 0.9923         |
| 7.0   | 56   | 0.9923         |
| 8.0   | 64   | 0.9923         |
| 9.0   | 72   | 0.9923         |
| 10.0  | 80   | 0.9923         |
| 1.0   | 8    | 0.9846         |
| 2.0   | 16   | 0.9846         |
| 3.0   | 24   | 0.9846         |
| 4.0   | 32   | 0.9923         |
| 5.0   | 40   | 0.9846         |
| 6.0   | 48   | 0.9846         |
| 6.25  | 50   | 0.9846         |
| 7.0   | 56   | 0.9846         |
| 8.0   | 64   | 0.9846         |
| 9.0   | 72   | 0.9846         |
| 10.0  | 80   | 0.9846         |

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 4.1.0
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}