SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-small-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
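
Because the final Normalize() module L2-normalizes every embedding, dot-product and cosine similarity give the same rankings. A minimal sketch verifying the advertised shape and normalization (the model ID is taken from the usage example further down):

from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("srinivasanAI/bge-small-my-qna-model")
embedding = model.encode("An example sentence")  # a single string encodes to a 1-D array
print(embedding.shape)            # (384,) -- matches the output dimensionality above
print(np.linalg.norm(embedding))  # ~1.0  -- unit length, thanks to the Normalize() module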

Fine-Tuned BGE-Small Model for Q&A

This is a BAAI/bge-small-en-v1.5 model fine-tuned for a question-answering retrieval task using the MultipleNegativesRankingLoss in the sentence-transformers library.

It was trained on a private dataset of 100,000+ question-answer pairs. Its primary purpose is to serve as the retriever in a Retrieval-Augmented Generation (RAG) system, mapping questions to the passages that contain their answers.

How to Use (Practical Inference Example)

The primary use case is to find the most relevant passage for a given query.

from sentence_transformers import SentenceTransformer, util

# Load the fine-tuned model from the Hub
model_id = "srinivasanAI/bge-small-my-qna-model" # Replace with your model ID
model = SentenceTransformer(model_id)

# The BGE model requires a specific instruction for retrieval queries
instruction = "Represent this sentence for searching relevant passages: "

# 1. Define your query and your potential answers (passages)
query = instruction + "What is the powerhouse of the cell?"

passages = [
    "Mitochondria are organelles that act like a digestive system and are often called the powerhouse of the cell.",
    "The cell wall is a rigid layer that provides structural support to plant cells.",
    "The sun is a star at the center of the Solar System."
]

# 2. Encode the single query and the list of passages separately
query_embedding = model.encode(query)
passage_embeddings = model.encode(passages)

# 3. Calculate the similarity between the single query and all passages
similarities = util.cos_sim(query_embedding, passage_embeddings)

# 4. Print the results
print(f"Query: {query.replace(instruction, '')}\\n")
for i, passage in enumerate(passages):
    print(f"Similarity: {similarities[0][i]:.4f} | Passage: {passage}")

Training Details

Training Dataset

Unnamed Dataset

  • Size: 100,231 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:
    - sentence_0: string; min 18 tokens, mean 19.69 tokens, max 31 tokens
    - sentence_1: string; min 16 tokens, mean 139.68 tokens, max 512 tokens
  • Samples:
    - sentence_0: Represent this sentence for searching relevant passages: where did strangers prey at night take place
      sentence_1: The Strangers: Prey at Night In a secluded trailer park in Salem, Arkansas, the three masked killers, The Walker family — Dollface, Pin Up Girl, and the Man in the Mask — arrive. Dollface kills a female occupant and then lies down in bed next to the woman's sleeping husband.
    - sentence_0: Represent this sentence for searching relevant passages: what is the average height of the highest peaks in the drakensberg mountain range
      sentence_1: Drakensberg During the past 20 million years, further massive upliftment, especially in the East, has taken place in Southern Africa. As a result, most of the plateau lies above 1,000 m (3,300 ft) despite the extensive erosion. The plateau is tilted such that its highest point is in the east, and it slopes gently downwards towards the west and south. The elevation of the edge of the eastern escarpments is typically in excess of 2,000 m (6,600 ft). It reaches its highest point (over 3,000 m (9,800 ft)) where the escarpment forms part of the international border between Lesotho and the South African province of KwaZulu-Natal.[5][8]
    - sentence_0: Represent this sentence for searching relevant passages: name the two epics of india which are woven around with legends
      sentence_1: Indian epic poetry Indian epic poetry is the epic poetry written in the Indian subcontinent, traditionally called Kavya (or Kāvya; Sanskrit: काव्य, IAST: kāvyá). The Ramayana and the Mahabharata, which were originally composed in Sanskrit and later translated into many other Indian languages, and The Five Great Epics of Tamil Literature and Sangam literature are some of the oldest surviving epic poems ever written.[1]
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
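
With this loss, each (question, passage) pair in a batch is a positive and every other passage in the same batch acts as a negative; cosine similarities are multiplied by scale=20.0 before the softmax cross-entropy step. A minimal sketch of constructing the loss as configured above, assuming the standard sentence-transformers API:

from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("BAAI/bge-small-en-v1.5")
# scale sharpens the similarity distribution before cross-entropy;
# negatives come for free from the other pairs in the batch.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)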
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
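
These settings map directly onto the sentence-transformers trainer API. A hedged sketch of a training script reproducing them; since the real 100k-pair dataset is private, the one-row dataset below is a stand-in using the card's sentence_0/sentence_1 columns:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# Stand-in for the private Q&A dataset (same column names as in the card)
train_dataset = Dataset.from_dict({
    "sentence_0": ["Represent this sentence for searching relevant passages: what is the powerhouse of the cell"],
    "sentence_1": ["Mitochondria are often called the powerhouse of the cell."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-my-qna-model",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=1,
    multi_dataset_batch_sampler="round_robin",  # only matters when training on multiple datasets
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model, scale=20.0),
)
trainer.train()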

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.1596 500 0.0556
0.3192 1000 0.0245
0.4788 1500 0.0236
0.6384 2000 0.0179
0.7980 2500 0.0202
0.9575 3000 0.0184

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 4.1.0
  • Transformers: 4.53.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.9.0
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}