# SPLADE-BERT-Mini
This is a SPLADE Sparse Encoder model finetuned from prajjwal1/bert-mini using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details

### Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: prajjwal1/bert-mini
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Language: en
- License: mit
### Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
### Full Model Architecture

```
SparseEncoder(
  (0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'BertForMaskedLM'})
  (1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
```
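The `SpladePooling` module turns the per-token MLM logits into a single vocabulary-sized vector. A minimal NumPy sketch of the standard SPLADE formulation (log-saturated ReLU, max-pooled over tokens); the real module also handles padding masks and batching, and the logits here are random stand-ins:

```python
import numpy as np

# Hypothetical MLM logits for a 4-token input over the 30522-term vocabulary
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 30522))  # (seq_len, vocab_size)

# SpladePooling with pooling_strategy='max' and activation_function='relu':
# per-term weight = max over tokens of log(1 + relu(logit))
weights = np.log1p(np.maximum(logits, 0.0)).max(axis=0)

print(weights.shape)  # (30522,)
```

With a trained model, most logits are strongly negative, so the ReLU zeroes almost every term and only a few dozen dimensions stay active.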
## Usage

### Direct Usage (Sentence Transformers)

First, install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder

# Download from the 🤗 Hub
model = SparseEncoder("rasyosef/SPLADE-BERT-Mini")

# Run inference
queries = [
    "where is oestrogen produced",
]
documents = [
    'Estrogens, in females, are produced primarily by the ovaries, and during pregnancy, the placenta. Follicle-stimulating hormone (FSH) stimulates the ovarian production of estrogens by the granulosa cells of the ovarian follicles and corpora lutea.strogen or oestrogen (see spelling differences) is the primary female sex hormone and is responsible for development and regulation of the female reproductive system and secondary sex characteristics. Estrogen may also refer to any substance, natural or synthetic that mimics the effects of the natural hormone.',
    "Making the world better, one answer at a time. Estrogen is produced in the ovaries, primarily the theca (wall) of developing follicles in the ovary, though also to a lesser extent the corpus luteum (remaining out 'shell' which previously contained an egg) and, during certain stages of pregnancy, the placenta.he production of the estrogen in the ovaries is stimulated by the lutenizing hormone. Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.",
    'The pituitary gland secretes a hormone which induces the production of estrogen in the ovaries. Estrogens are primarily produced by (and released from) the follicles in the ovaries (the corpus luterum) and the placenta (the organ that connects the developing fetus to the uterine wall).The production of the estrogen in the ovaries is stimulated by the lutenizing hormone.Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.he production of the estrogen in the ovaries is stimulated by the lutenizing hormone. Some estrogens are produced in smaller quantities by liver adrenal glands and brests. Estrogen is produced in the ovaries but if you wish to go back further than that is is based on the cholesterol molecule. ovary.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]

# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[17.0112, 13.5808, 13.2221]])
```
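The similarity above is a plain dot product between sparse vectors, so only vocabulary terms active in both the query and a document contribute to the score. A toy illustration with the vocabulary shrunk to 10 dimensions (all weights are made up):

```python
import numpy as np

# Made-up sparse term weights; real SPLADE vectors have 30522 dimensions
query = np.array([[0.0, 1.2, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.9, 0.0]])
docs = np.array([
    [0.0, 0.8, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.1, 0.0],  # shares terms 1 and 8
    [0.3, 0.0, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # shares only term 3
])

# Dot-product similarity: overlapping terms drive the score
scores = query @ docs.T
print(scores)  # first doc scores 1.2*0.8 + 0.9*1.1 = 1.95, second 0.5*0.2 = 0.1
```

Because most dimensions are zero, production systems compute this with an inverted index rather than dense matrix products.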
## Evaluation

### Metrics

#### Sparse Information Retrieval

- Evaluated with `SparseInformationRetrievalEvaluator`
| Metric                | Value    |
|:----------------------|:---------|
| dot_accuracy@1        | 0.4748   |
| dot_accuracy@3        | 0.7852   |
| dot_accuracy@5        | 0.882    |
| dot_accuracy@10       | 0.9418   |
| dot_precision@1       | 0.4748   |
| dot_precision@3       | 0.2687   |
| dot_precision@5       | 0.1827   |
| dot_precision@10      | 0.0986   |
| dot_recall@1          | 0.4597   |
| dot_recall@3          | 0.772    |
| dot_recall@5          | 0.871    |
| dot_recall@10         | 0.9357   |
| dot_ndcg@10           | 0.7129   |
| dot_mrr@10            | 0.6443   |
| dot_map@100           | 0.6401   |
| query_active_dims     | 27.2148  |
| query_sparsity_ratio  | 0.9991   |
| corpus_active_dims    | 153.6709 |
| corpus_sparsity_ratio | 0.995    |
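The sparsity ratios follow directly from the active-dimension counts. A quick sanity check, assuming ratio = 1 − active_dims / 30522:

```python
VOCAB_SIZE = 30522  # output dimensionality of the model

def sparsity_ratio(active_dims: float) -> float:
    """Average fraction of vocabulary dimensions that are zero."""
    return 1.0 - active_dims / VOCAB_SIZE

print(round(sparsity_ratio(27.2148), 4))   # 0.9991 (query)
print(round(sparsity_ratio(153.6709), 4))  # 0.995 (corpus)
```

Queries end up roughly 5-6x sparser than documents, consistent with the higher query regularizer weight used during training.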
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 500,000 training samples
- Columns: `query`, `positive`, `negative_1`, and `negative_2`
- Approximate statistics based on the first 1000 samples:
|         | query | positive | negative_1 | negative_2 |
|:--------|:------|:---------|:-----------|:-----------|
| type    | string | string | string | string |
| details | min: 4 tokens<br>mean: 9.01 tokens<br>max: 32 tokens | min: 16 tokens<br>mean: 78.72 tokens<br>max: 230 tokens | min: 20 tokens<br>mean: 76.0 tokens<br>max: 251 tokens | min: 19 tokens<br>mean: 76.42 tokens<br>max: 222 tokens |
- Samples:

| query | positive | negative_1 | negative_2 |
|:------|:---------|:-----------|:-----------|
| what is download upload speed | Almost every speed test site tests for download speed, upload speed, and the ping rate. The upload rate is always lower than the download rate. This is a configuration set by the local cable carrier — it is not dependent on the user’s bandwidth or Internet speed.he Difference. There is none. Download speed is the rate at which data is transferred from the Internet to the user’s computer. The upload speed is the rate that data is transferred from the user’s computer to the Internet. | Speed Limits. The download speed is typically much faster than the upload speed. The price you pay for Internet access with most devices is based on the maximum number of bytes per second the service provides, although cellular carriers charge by the total bytes transmitted.hristopher Robbins/Photodisc/Getty Images. Internet speed refers to the speed at which you send or receive data from your computer, phone or other device. Download speed is the rate your connection receives data. Upload speed is the number of bytes per second you can send. | If you find that your download or upload speed is not equal to what your Internet service provider promised, there are a couple of easy fixes you can perform. Use a wired connection to the router instead of wireless. Performing a speed test across a wireless connection will always give slower results.he Difference. There is none. Download speed is the rate at which data is transferred from the Internet to the user’s computer. The upload speed is the rate that data is transferred from the user’s computer to the Internet. |
| what is sdn | CompanyCase Studies. Software-defined networking (SDN) is an approach to network virtualization that seeks to optimize network resources and quickly adapt networks to changing business needs, applications, and traffic. | Historically, networking has been performed through two abstractions, a Data plane and a Control plane. The data plane rapidly processes packets: it looks at the state and packet header, then makes a forwarding decision. The control plane is what puts that forwarding state there. | (Learn how and when to remove these template messages) Software-defined networking (SDN) is an approach to computer networking that allows network administrators to programmatically initialize, control, change, and manage network behavior dynamically via open interfaces and abstraction of lower-level functionality. |
| can vacuuming every day lessen fleas | Thoroughly and regularly clean areas where you find adult fleas, flea larvae, and flea eggs. Vacuum floors, rugs, carpets, upholstered furniture, and crevices around baseboards and cabinets daily or every other day to remove flea eggs, larvae, and adults. | LIFE CYCLE. Unlike most fleas, adult cat fleas remain on the host where feeding, mating, and egg laying occur. Females lay about 20 to 50 eggs per day. Cat flea eggs are pearly white, oval, and about 1/32 inch long (Figure 3). | I wash my sheets every day , vacuum , shampoo , and even wash the pets , with different shampoo every time and use different sprays every time as I learned fleas become resistant if you constantly use the same all the time . I’m at wits end and I am scared to even enter my house. December 13, 2015 at 12:57 PM #44900. |
- Loss: `SpladeLoss` with these parameters:
  ```json
  {
      "loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
      "document_regularizer_weight": 0.003,
      "query_regularizer_weight": 0.005
  }
  ```
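`SpladeLoss` combines the ranking loss with FLOPS-style regularizers that push activations toward sparsity. A small NumPy sketch of the FLOPS term (Paria et al., cited below): the squared mean activation of each vocabulary term across the batch, summed over the vocabulary:

```python
import numpy as np

# Toy batch of two sparse embeddings, vocabulary shrunk to 6 terms
W = np.array([
    [0.0, 1.0, 0.0, 2.0, 0.0, 0.0],
    [0.0, 3.0, 0.0, 0.0, 0.0, 1.0],
])

# FLOPS regularizer: penalizes terms that fire often across the batch,
# which encourages each term to be used rarely (i.e. sparse vectors)
flops = (W.mean(axis=0) ** 2).sum()
print(flops)  # per-term means [0, 2, 0, 1, 0, 0.5] -> 4 + 1 + 0.25 = 5.25
```

Per the parameters above, this term enters the total loss scaled by 0.003 for document embeddings and 0.005 for query embeddings.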
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 8
- `learning_rate`: 6e-05
- `num_train_epochs`: 6
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.025
- `fp16`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `push_to_hub`: True
- `batch_sampler`: no_duplicates
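With a per-device batch size of 16 and 8 gradient-accumulation steps, the effective batch size is 128, which over 500,000 samples gives the ~3,907 optimizer steps per epoch seen in the training logs. The cosine schedule with linear warmup can be sketched as follows (a rough standalone version of the Hugging Face scheduler, assuming the 23,442 total steps from the logs):

```python
import math

TOTAL_STEPS = 6 * 3907             # 23442 optimizer steps across 6 epochs
PEAK_LR = 6e-05                    # learning_rate
WARMUP = int(TOTAL_STEPS * 0.025)  # warmup_ratio -> 586 warmup steps

def lr_at(step: int) -> float:
    """Linear warmup to PEAK_LR, then cosine decay toward zero."""
    if step < WARMUP:
        return PEAK_LR * step / WARMUP
    progress = (step - WARMUP) / (TOTAL_STEPS - WARMUP)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(WARMUP))       # peak: 6e-05
print(lr_at(TOTAL_STEPS))  # end of training: ~0
```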
#### All Hyperparameters

<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 6e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 6
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.025
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}

</details>
### Training Logs

| Epoch | Step  | Training Loss | dot_ndcg@10 |
|:-----:|:-----:|:-------------:|:-----------:|
| 1.0   | 3907  | 19.5833       | 0.7041      |
| 2.0   | 7814  | 0.7032        | 0.7125      |
| 3.0   | 11721 | 0.6323        | 0.7149      |
| 4.0   | 15628 | 0.5691        | 0.7192      |
| 5.0   | 19535 | 0.5214        | 0.7128      |
| **6.0** | **23442** | **0.4996** | **0.7129** |

- The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.0.0
- Transformers: 4.53.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### SpladeLoss

```bibtex
@misc{formal2022distillationhardnegativesampling,
    title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
    author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
    year={2022},
    eprint={2205.04733},
    archivePrefix={arXiv},
    primaryClass={cs.IR},
    url={https://arxiv.org/abs/2205.04733},
}
```

#### SparseMultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
}
```

#### FlopsLoss

```bibtex
@article{paria2020minimizing,
    title={Minimizing flops to learn efficient sparse representations},
    author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
    journal={arXiv preprint arXiv:2004.05665},
    year={2020}
}
```