SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
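
Because the final Normalize() module L2-normalizes every output vector, dot product and cosine similarity coincide for this model. A quick check (a minimal sketch, not part of the original card):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Shipmaster1/finetuned_mpnet_matryoshka_mnr")
emb = model.encode(["a short test sentence"])

# The Normalize() module makes every embedding unit-length,
# so cosine similarity reduces to a plain dot product.
print(np.linalg.norm(emb[0]))  # ~1.0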

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Shipmaster1/finetuned_mpnet_matryoshka_mnr")
# Run inference
sentences = [
    '"What are some business opportunities where implementing AI or building an agent could lead to long-term competitiveness and success in the next two to three years?"',
    "a cost Advantage there's a Time Advantage but I'm not I mean I would be really curious to put that back to you like what do you see as like a business that you can Implement AI or build an agent that can still be highly competitive and successful in two or three years because I'm increasingly of the",
    "readers and book bingers are female I think having female oriented lit RPGs could be a fascinating subcategory and differentiat cuz I was scrolling through and every single book cover is like male enters Game World defeats Dragon okay where's the female Market in that is is interesting and then a",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
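
Because this model was also trained with MatryoshkaLoss (see Training Details below), embeddings can be truncated to 512, 256, or 128 dimensions with only a modest quality loss. A minimal sketch using the truncate_dim argument of SentenceTransformer:

from sentence_transformers import SentenceTransformer

# Load the model with truncated Matryoshka embeddings (e.g. 256 dimensions)
model = SentenceTransformer("Shipmaster1/finetuned_mpnet_matryoshka_mnr", truncate_dim=256)
embeddings = model.encode(["a query", "a candidate passage"])
print(embeddings.shape)
# [2, 256]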

Training Details

Training Datasets

Unnamed Dataset

  • Size: 41,472 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 12 tokens, mean 26.2 tokens, max 66 tokens
    • sentence_1 (string): min 4 tokens, mean 67.7 tokens, max 91 tokens
  • Samples:
    • sentence_0: What are some common misconceptions about the concept of strategy?
      sentence_1: but I do think it's a problem which is you hear strategy and I think um everyone has a different definition of it which is already a problem um and a lot of people conjure up these academic versions of strategies which I agree is not very useful maybe not useful at all and work and and uh um
    • sentence_0: "What are some ways to afford a subscription service and potentially get a free trial to salvage your business without compromising human relationships with co-founders?"
      sentence_1: you can that you can afford the subscription to the service and then even then they might even give you a week for free or whatever to try and Salvage your business I think the I don't want to remove Humanity from co-founder relationships obviously that is not something I'm looking for ideally this
    • sentence_0: What are the benefits of using email-based courses in your everyday routine to enhance social content and establish trust with your audience?
      sentence_1: like wow I'm making this a part of my everyday routine and I don't want to lose it so why I'm a big believer in things like email based courses is gives you the carrot that you can use on social content and also use paid ads against it and then it also gives you the ability to earn trust with this
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
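
For reference, a minimal sketch of how a (question, passage) pair dataset can be trained with this loss via the Sentence Transformers trainer API; the data here is illustrative, not the actual training set:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Illustrative positive pairs; other in-batch passages serve as negatives
train_dataset = Dataset.from_dict({
    "sentence_0": ["What are some common misconceptions about strategy?"],
    "sentence_1": ["but I do think it's a problem which is you hear strategy ..."],
})

loss = MultipleNegativesRankingLoss(model, scale=20.0)  # cosine similarity is the default
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()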
    

Unnamed Dataset

  • Size: 41,472 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 13 tokens, mean 26.71 tokens, max 87 tokens
    • sentence_1 (string): min 9 tokens, mean 67.98 tokens, max 90 tokens
  • Samples:
    • sentence_0: "Can you find videos demonstrating how to use email automation for researching vegan caterers in Dallas County?"
      sentence_1: I'm gonna have all kinds of emails from these Caterers for sure I mean and you could also have it do research for you right it's like hey I want you to find um vegan caters only in Dallas County um like the more you're prompt the more you prompted to do the more you save time I'm gonna ask why the
    • sentence_0: What are the potential implications of having AI coding teams that could replace entire human teams in organizations, particularly in the context of a trillion-dollar organization in the next 20 years?
      sentence_1: have multi- aent coding teams that basically replace entire teams of humans. Right now we have even a you imagine a a single person trillion dollar organization or something that might exist in the next 20 years. What does that look like and what are agents doing behind the scenes? You know, we and
    • sentence_0: How can excess power generation from solar panels on a house be shared with neighbors instead of being sent far away from the grid?
      sentence_1: model if your house has solar panels up top and you have Excess power generation or capture you can lend it out to your neighbor rather than needing to take it far away from the grid but in strict technical terms the general consensus advocated by Folks at Intel and meta is a thousand Factor
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
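
A minimal sketch of how MatryoshkaLoss wraps the inner MultipleNegativesRankingLoss with the dimensions and weights listed above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# The inner loss is computed at each truncated dimensionality, weighted equally
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128],
    matryoshka_weights=[1, 1, 1, 1],
)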
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 5
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
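
These map onto the trainer's arguments roughly as follows (a hedged sketch; output_dir is illustrative):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_mpnet_matryoshka_mnr",  # illustrative path
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    fp16=True,
    # Alternate batches between the two training datasets listed above
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)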

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch   Step    Training Loss
0.1929  500     0.0878
0.3858  1000    0.0375
0.5787  1500    0.029
0.7716  2000    0.0238
0.9645  2500    0.0143
1.1574  3000    0.0086
1.3503  3500    0.0063
1.5432  4000    0.0068
1.7361  4500    0.005
1.9290  5000    0.0049
2.1219  5500    0.005
2.3148  6000    0.003
2.5077  6500    0.0061
2.7006  7000    0.0053
2.8935  7500    0.0055
3.0864  8000    0.0058
3.2793  8500    0.0048
3.4722  9000    0.0034
3.6651  9500    0.0042
3.8580  10000   0.0031
4.0509  10500   0.0044
4.2438  11000   0.005
4.4367  11500   0.0053
4.6296  12000   0.0039
4.8225  12500   0.0068

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}