SPLADE-BERT-Mini

This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-mini using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.

Model Details

Model Description

  • Model Type: SPLADE Sparse Encoder
  • Base model: prajjwal1/bert-mini
  • Model Size: 11.2M parameters (F32)
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 30522 dimensions
  • Similarity Function: Dot Product
  • Language: en
  • License: mit

Full Model Architecture

SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
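The SpladePooling step above can be sketched numerically (a minimal NumPy illustration with toy logits, not the library's internal code): each term weight is log(1 + ReLU(logit)) from the MLM head, max-pooled over the sequence dimension, matching the `activation_function='relu'` and `pooling_strategy='max'` settings in the config.

```python
import numpy as np

def splade_pool(mlm_logits: np.ndarray) -> np.ndarray:
    """Turn MLM logits of shape (seq_len, vocab_size) into one sparse vector.

    log(1 + relu(x)) keeps weights non-negative and dampens large logits;
    'max' pooling takes the strongest activation per vocabulary term.
    """
    weights = np.log1p(np.maximum(mlm_logits, 0.0))
    return weights.max(axis=0)

# Toy logits for a 4-token input over the 30522-term vocabulary
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 30522))
vec = splade_pool(logits)
print(vec.shape)  # (30522,)
```

Because most logits are negative before the ReLU, the pooled vector is mostly zeros, which is what makes the output usable as a sparse bag-of-terms representation.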

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/SPLADE-BERT-Mini")
# Run inference
queries = [
    "where is oestrogen produced",
]
documents = [
    'Estrogens, in females, are produced primarily by the ovaries, and during pregnancy, the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production of estrogens by the granulosa cells of the ovarian follicles and corpora lutea.strogen or oestrogen (see spelling differences) is the primary female sex hormone and is responsible for development and regulation of the female reproductive system and secondary sex characteristics. Estrogen may also refer to any substance, natural or synthetic that mimics the effects of the natural hormone.',
    "Making the world better, one answer at a time. Estrogen is produced in the ovaries, primarily the theca (wall) of developing follicles in the ovary, though also to a lesser extent the corpus luteum (remaining out 'shell' which previously contained an egg) and, during certain stages of pregnancy, the placenta.he production of the estrogen in the ovaries is stimulated by the lutenizing hormone. Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.",
    'The pituitary gland secretes a hormone which induces the production of estrogen in the ovaries. Estrogens are primarily produced by (and released from) the follicles in the ovaries (the corpus luterum) and the placenta (the organ that connects the developing fetus to the uterine wall).The production of the estrogen in the ovaries is stimulated by the lutenizing hormone.Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.he production of the estrogen in the ovaries is stimulated by the lutenizing hormone. Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[17.0112, 13.5808, 13.2221]])
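Since the similarity function is a dot product over sparse vectors, scoring only touches terms that are active in both query and document. A minimal sketch with hypothetical term weights (not actual model outputs):

```python
# Hypothetical sparse vectors as {term: weight} dicts; real SPLADE vectors
# assign learned weights to BERT vocabulary ids, but the scoring is the same.
def sparse_dot(query_vec: dict, doc_vec: dict) -> float:
    # Only terms active in BOTH vectors contribute, which is why SPLADE
    # scores can be served from a classic inverted index
    return sum(w * doc_vec[t] for t, w in query_vec.items() if t in doc_vec)

query = {"estrogen": 2.1, "produced": 1.4, "where": 0.3}
doc = {"estrogen": 1.8, "ovaries": 2.0, "produced": 1.1}
score = sparse_dot(query, doc)  # 2.1*1.8 + 1.4*1.1 ≈ 5.32
```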

Evaluation

Metrics

Sparse Information Retrieval

Metric Value
dot_accuracy@1 0.4748
dot_accuracy@3 0.7852
dot_accuracy@5 0.882
dot_accuracy@10 0.9418
dot_precision@1 0.4748
dot_precision@3 0.2687
dot_precision@5 0.1827
dot_precision@10 0.0986
dot_recall@1 0.4597
dot_recall@3 0.772
dot_recall@5 0.871
dot_recall@10 0.9357
dot_ndcg@10 0.7129
dot_mrr@10 0.6443
dot_map@100 0.6401
query_active_dims 27.2148
query_sparsity_ratio 0.9991
corpus_active_dims 153.6709
corpus_sparsity_ratio 0.995
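The sparsity ratios above follow directly from the active-dimension counts, assuming sparsity_ratio = 1 − active_dims / 30522:

```python
VOCAB_SIZE = 30522  # output dimensionality of the model

def sparsity_ratio(active_dims: float) -> float:
    # Fraction of the 30522 vocabulary dimensions that are exactly zero
    return 1.0 - active_dims / VOCAB_SIZE

print(round(sparsity_ratio(27.2148), 4))   # 0.9991 (query vectors)
print(round(sparsity_ratio(153.6709), 4))  # 0.995  (corpus vectors)
```

In other words, queries activate roughly 27 of the 30522 dimensions on average and documents roughly 154, so both stay extremely sparse.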

Training Details

Training Dataset

Unnamed Dataset

  • Size: 500,000 training samples
  • Columns: query, positive, negative_1, and negative_2
  • Approximate statistics based on the first 1000 samples (token counts):
    query:      min 4,  mean 9.01,  max 32
    positive:   min 16, mean 78.72, max 230
    negative_1: min 20, mean 76.0,  max 251
    negative_2: min 19, mean 76.42, max 222
  • Samples:
    (each row below shows the query followed by its positive and negative passages, which were concatenated when the table was flattened)
    what is download upload speed Almost every speed test site tests for download speed, upload speed, and the ping rate. The upload rate is always lower than the download rate. This is a configuration set by the local cable carrier — it is not dependent on the user’s bandwidth or Internet speed.he Difference. There is none. Download speed is the rate at which data is transferred from the Internet to the user’s computer. The upload speed is the rate that data is transferred from the user’s computer to the Internet. Speed Limits. The download speed is typically much faster than the upload speed. The price you pay for Internet access with most devices is based on the maximum number of bytes per second the service provides, although cellular carriers charge by the total bytes transmitted.hristopher Robbins/Photodisc/Getty Images. Internet speed refers to the speed at which you send or receive data from your computer, phone or other device. Download speed is the rate your connection receives data. Upload speed is the number of bytes per second you can send. If you find that your download or upload speed is not equal to what your Internet service provider promised, there are a couple of easy fixes you can perform. Use a wired connection to the router instead of wireless. Performing a speed test across a wireless connection will always give slower results.he Difference. There is none. Download speed is the rate at which data is transferred from the Internet to the user’s computer. The upload speed is the rate that data is transferred from the user’s computer to the Internet.
    what is sdn CompanyCase Studies. Software-defined networking (SDN) is an approach to network virtualization that seeks to optimize network resources and quickly adapt networks to changing business needs, applications, and traffic. Historically, networking has been performed through two abstractions, a Data plane and a Control plane. The data plane rapidly processes packets: it looks at the state and packet header, then makes a forwarding decision. The control plane is what puts that forwarding state there. (Learn how and when to remove these template messages) Software-defined networking (SDN) is an approach to computer networking that allows network administrators to programmatically initialize, control, change, and manage network behavior dynamically via open interfaces and abstraction of lower-level functionality.
    can vacuuming every day lessen fleas Thoroughly and regularly clean areas where you find adult fleas, flea larvae, and flea eggs. Vacuum floors, rugs, carpets, upholstered furniture, and crevices around baseboards and cabinets daily or every other day to remove flea eggs, larvae, and adults. LIFE CYCLE. Unlike most fleas, adult cat fleas remain on the host where feeding, mating, and egg laying occur. Females lay about 20 to 50 eggs per day. Cat flea eggs are pearly white, oval, and about 1/32 inch long (Figure 3). I wash my sheets every day , vacuum , shampoo , and even wash the pets , with different shampoo every time and use different sprays every time as I learned fleas become resistant if you constantly use the same all the time . I’m at wits end and I am scared to even enter my house. December 13, 2015 at 12:57 PM #44900.
  • Loss: SpladeLoss with these parameters:
    {
        "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
        "document_regularizer_weight": 0.003,
        "query_regularizer_weight": 0.005
    }
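The two regularizer weights penalize dense outputs via the FLOPS regularizer (see the FlopsLoss citation below): roughly, the sum over vocabulary terms of the squared batch-mean activation. A minimal NumPy sketch on toy data; the full SpladeLoss also includes the ranking term:

```python
import numpy as np

def flops_reg(embeddings: np.ndarray) -> float:
    """FLOPS regularizer: sum_j (mean_i w_ij)^2 over vocabulary terms j.

    Shrinking each term's batch-average weight pushes whole vocabulary
    dimensions toward zero, i.e. it directly encourages sparsity.
    """
    return float((embeddings.mean(axis=0) ** 2).sum())

rng = np.random.default_rng(0)
queries = np.maximum(rng.normal(size=(16, 30522)), 0.0)  # SPLADE weights >= 0
docs = np.maximum(rng.normal(size=(16, 30522)), 0.0)

# Weighted as in the SpladeLoss parameters above
regularization = 0.005 * flops_reg(queries) + 0.003 * flops_reg(docs)
```

Note the query regularizer weight (0.005) is larger than the document one (0.003), which is consistent with the much sparser query vectors reported in the metrics.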
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 8
  • learning_rate: 6e-05
  • num_train_epochs: 6
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.025
  • fp16: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • push_to_hub: True
  • batch_sampler: no_duplicates
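With gradient accumulation, the effective batch size is 16 × 8 = 128, which reconciles the 500,000-sample dataset with the 3,907 steps per epoch seen in the training logs:

```python
import math

per_device_train_batch_size = 16
gradient_accumulation_steps = 8
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps

steps_per_epoch = math.ceil(500_000 / effective_batch_size)
print(effective_batch_size, steps_per_epoch)  # 128 3907
```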

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 8
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 6e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 6
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.025
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: True
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional
  • router_mapping: {}
  • learning_rate_mapping: {}

Training Logs

Epoch  Step   Training Loss  dot_ndcg@10
1.0    3907   19.5833        0.7041
2.0    7814   0.7032         0.7125
3.0    11721  0.6323         0.7149
4.0    15628  0.5691         0.7192
5.0    19535  0.5214         0.7128
6.0    23442  0.4996         0.7129
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 5.0.0
  • Transformers: 4.53.1
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.6.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

SpladeLoss

@misc{formal2022distillationhardnegativesampling,
      title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
      author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
      year={2022},
      eprint={2205.04733},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2205.04733},
}

SparseMultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

FlopsLoss

@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}