ModernBERT-small-Retrieval-BEIR-Tuned
This is a sentence-transformers model trained on the msmarco, gooaq, and natural_questions datasets. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
This model is based on the wide, shallow architecture of johnnyboycurtis/ModernBERT-small:
from transformers import ModernBertConfig, ModernBertModel

small_modernbert_config = ModernBertConfig(
    hidden_size=384,               # A common dimension for small embedding models
    num_hidden_layers=12,          # Significantly fewer layers than the base's 22
    num_attention_heads=6,         # Must be a divisor of hidden_size
    intermediate_size=1536,        # 4 * hidden_size -- a deliberately wide FFN
    max_position_embeddings=1024,  # Max sequence length for the model; originally 8192
)
model = ModernBertModel(small_modernbert_config)
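As a quick sanity check, reusing the model built above, counting parameters confirms the reduced footprint relative to the 22-layer base:

n_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {n_params / 1e6:.1f}M")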
Model Details
Model Description
- Model Type: Sentence Transformer
- Maximum Sequence Length: 1024 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
- Training Datasets: msmarco, gooaq, and natural_questions
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
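The Pooling module uses mean pooling (pooling_mode_mean_tokens: True): token embeddings from the ModernBERT encoder are averaged, ignoring padding, into a single 384-dimensional vector per input. A minimal PyTorch sketch of that operation; the function name and shapes are illustrative:

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Expand the mask so padding positions contribute nothing to the sum
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # [batch, seq_len, 1]
    summed = (token_embeddings * mask).sum(dim=1)                   # [batch, 384]
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # real tokens per input
    return summed / counts                                          # mean over non-padding tokens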
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("johnnyboycurtis/ModernBERT-small-retrieval")
# Run inference
sentences = [
'how did triangular trade benefit european colonies in the americas',
'Triangular trade New England also benefited from the trade, as many merchants from New England, especially the state of Rhode Island, replaced the role of Europe in the triangle. New England also made rum from the Caribbean sugar and molasses, which it shipped to Africa as well as within the New World.[7] Yet, the "triangle trade" as considered in relation to New England was a piecemeal operation. No New England traders are known to have completed a sequential circuit of the full triangle, which took a calendar year on average, according to historian Clifford Shipton.[8] The concept of the New England Triangular trade was first suggested, inconclusively, in an 1866 book by George H. Moore, was picked up in 1872 by historian George C. Mason, and reached full consideration from a lecture in 1887 by American businessman and historian William B. Weeden.[9] The song "Molasses to Rum" from the musical 1776 vividly describes this form of the triangular trade.',
"Ohm's law Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance,[1] one arrives at the usual mathematical equation that describes this relationship:[2]",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.4886, -0.0460],
# [ 0.4886, 1.0000, -0.0096],
# [-0.0460, -0.0096, 1.0000]])
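Because the model is tuned for retrieval, the same API supports query-to-document search. Below is a minimal sketch using util.semantic_search; the query and corpus strings are illustrative only:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("johnnyboycurtis/ModernBERT-small-retrieval")

query_embeddings = model.encode(["what is ohm's law"], convert_to_tensor=True)
corpus_embeddings = model.encode(
    [
        "Ohm's law states that the current through a conductor is proportional to the voltage.",
        "Triangular trade linked Europe, Africa, and the Americas.",
    ],
    convert_to_tensor=True,
)

# One ranked list of {'corpus_id': ..., 'score': ...} dicts per query
hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
print(hits[0])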
Evaluation
Metrics
Information Retrieval
- Datasets: NanoMSMARCO, NanoNQ, and NanoHotpotQA
- Evaluated with InformationRetrievalEvaluator
Metric | NanoMSMARCO | NanoNQ | NanoHotpotQA |
---|---|---|---|
cosine_accuracy@1 | 0.12 | 0.22 | 0.64 |
cosine_accuracy@3 | 0.26 | 0.46 | 0.7 |
cosine_accuracy@5 | 0.34 | 0.56 | 0.76 |
cosine_accuracy@10 | 0.54 | 0.58 | 0.86 |
cosine_precision@1 | 0.12 | 0.22 | 0.64 |
cosine_precision@3 | 0.0867 | 0.1533 | 0.2933 |
cosine_precision@5 | 0.068 | 0.112 | 0.208 |
cosine_precision@10 | 0.054 | 0.058 | 0.126 |
cosine_recall@1 | 0.12 | 0.2 | 0.32 |
cosine_recall@3 | 0.26 | 0.42 | 0.44 |
cosine_recall@5 | 0.34 | 0.52 | 0.52 |
cosine_recall@10 | 0.54 | 0.54 | 0.63 |
cosine_ndcg@10 | 0.3043 | 0.3813 | 0.5626 |
cosine_mrr@10 | 0.2326 | 0.3473 | 0.6951 |
cosine_map@100 | 0.25 | 0.3372 | 0.4763 |
Nano BEIR
- Dataset: NanoBEIR_mean
- Evaluated with NanoBEIREvaluator with these parameters: { "dataset_names": ["MSMARCO", "NQ", "HotpotQA"] }
Metric | Value |
---|---|
cosine_accuracy@1 | 0.3267 |
cosine_accuracy@3 | 0.4733 |
cosine_accuracy@5 | 0.5533 |
cosine_accuracy@10 | 0.66 |
cosine_precision@1 | 0.3267 |
cosine_precision@3 | 0.1778 |
cosine_precision@5 | 0.1293 |
cosine_precision@10 | 0.0793 |
cosine_recall@1 | 0.2133 |
cosine_recall@3 | 0.3733 |
cosine_recall@5 | 0.46 |
cosine_recall@10 | 0.57 |
cosine_ndcg@10 | 0.4161 |
cosine_mrr@10 | 0.425 |
cosine_map@100 | 0.3545 |
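The aggregate numbers above should be reproducible with the evaluator named earlier. A minimal sketch, assuming the NanoBEIR datasets can be downloaded from the Hugging Face Hub:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("johnnyboycurtis/ModernBERT-small-retrieval")
evaluator = NanoBEIREvaluator(dataset_names=["MSMARCO", "NQ", "HotpotQA"])
results = evaluator(model)
print(results["NanoBEIR_mean_cosine_ndcg@10"])  # ~0.416 per the table above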
Training Details
Training Datasets
msmarco
- Dataset: msmarco at 28ff31e
- Size: 502,939 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:

Statistic | anchor | positive | negative |
---|---|---|---|
type | string | string | string |
details | min: 4 tokens, mean: 9.17 tokens, max: 29 tokens | min: 18 tokens, mean: 79.48 tokens, max: 211 tokens | min: 17 tokens, mean: 81.21 tokens, max: 230 tokens |
- Samples:
  - anchor: what are the liberal arts?
    positive: liberal arts. 1. the academic course of instruction at a college intended to provide general knowledge and comprising the arts, humanities, natural sciences, and social sciences, as opposed to professional or technical subjects.
    negative: liberal arts definition. The areas of learning that cultivate general intellectual ability rather than technical or professional skills. The term liberal arts is often used as a synonym for humanities, although the liberal arts also include the sciences. The word liberal comes from the Latin liberalis, meaning suitable for a free man, as opposed to a slave.
  - anchor: what is the mechanism of action of fibrinolytic or thrombolytic drugs?
    positive: Baillière's Clinical Haematology. 6 Mechanism of action of the thrombolytic agents. 6 Mechanism of action of the thrombolytic agents JEFFREY I. WEITZ Fibrin formed during the haemostatic, inflammatory or tissue repair process serves a temporary role, and must be degraded to restore normal tissue function and structure.
    negative: Fibrinolytic drug. Fibrinolytic drug, also called thrombolytic drug, any agent that is capable of stimulating the dissolution of a blood clot (thrombus). Fibrinolytic drugs work by activating the so-called fibrinolytic pathway.
  - anchor: what is normal plat count
    positive: 78 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL). The average platelet count is 237,000 per mcL in men and 266,000 per mcL in women. 8 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).
    negative: Platelet Count. A platelet count is part of a complete blood count test. It is ordered when a patient experiences unexplainable bruises or when small cuts and wounds take longer time to heal. A normal platelet count of an average person is 150,000 to 450,000 platelets per microliter of blood. Others may have abnormal platelet count but it doesn't indicate any abnormality. Platelets, red cells, and plasma are the major components that form the human blood. Platelets are irregular shaped molecules with a colorless body and sticky surface that forms into clots to help stop the bleeding. When a person's normal platelet count is compromised that person's life might be put in danger.
- Loss: CachedMultipleNegativesSymmetricRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim", "mini_batch_size": 64 }
gooaq
- Dataset: gooaq at b089f72
- Size: 3,012,496 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

Statistic | anchor | positive |
---|---|---|
type | string | string |
details | min: 8 tokens, mean: 12.19 tokens, max: 22 tokens | min: 13 tokens, mean: 58.34 tokens, max: 124 tokens |
- Samples:
  - anchor: is toprol xl the same as metoprolol?
    positive: Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.
  - anchor: are you experienced cd steve hoffman?
    positive: The Are You Experienced album was apparently mastered from the original stereo UK master tapes (according to Steve Hoffman - one of the very few who has heard both the master tapes and the CDs produced over the years). ... The CD booklets were a little sparse, but at least they stayed true to the album's original design.
  - anchor: how are babushka dolls made?
    positive: Matryoshka dolls are made of wood from lime, balsa, alder, aspen, and birch trees; lime is probably the most common wood type. ... After cutting, the trees are stripped of most of their bark, although a few inner rings of bark are left to bind the wood and keep it from splitting.
- Loss: CachedMultipleNegativesSymmetricRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim", "mini_batch_size": 64 }
natural_questions
- Dataset: natural_questions at f9e894e
- Size: 100,231 training samples
- Columns: anchor and positive
- Approximate statistics based on the first 1000 samples:

Statistic | anchor | positive |
---|---|---|
type | string | string |
details | min: 10 tokens, mean: 12.47 tokens, max: 23 tokens | min: 17 tokens, mean: 138.32 tokens, max: 556 tokens |
- Samples:
  - anchor: when did richmond last play in a preliminary final
    positive: Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tig...
  - anchor: who sang what in the world's come over you
    positive: Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.
  - anchor: who produces the most wool in the world
    positive: Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.
- Loss: CachedMultipleNegativesRankingLoss with these parameters: { "scale": 20.0, "similarity_fct": "cos_sim", "mini_batch_size": 64 }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 256
- weight_decay: 0.01
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- bf16: True
- bf16_full_eval: True
- load_best_model_at_end: True
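These non-default values map directly onto SentenceTransformerTrainingArguments. A minimal sketch; the output_dir is hypothetical and eval/save step counts are left at their defaults:

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-small-retrieval",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=256,
    learning_rate=5e-5,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    bf16_full_eval=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
)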
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: True
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- hub_revision: None
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- liger_kernel_config: None
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
Training Logs
Epoch | Step | Training Loss | NanoMSMARCO_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
---|---|---|---|---|---|---|
-1 | -1 | - | 0.2890 | 0.3652 | 0.5065 | 0.3869 |
0.0708 | 1000 | 1.0794 | - | - | - | - |
0.1416 | 2000 | 1.007 | 0.2944 | 0.3596 | 0.5221 | 0.3920 |
0.2124 | 3000 | 0.9012 | - | - | - | - |
0.2832 | 4000 | 0.8323 | 0.3070 | 0.3797 | 0.5284 | 0.4050 |
0.3540 | 5000 | 0.7548 | - | - | - | - |
0.4248 | 6000 | 0.6963 | 0.2909 | 0.3768 | 0.5560 | 0.4079 |
0.4956 | 7000 | 0.6655 | - | - | - | - |
0.5664 | 8000 | 0.6235 | 0.3049 | 0.3778 | 0.5497 | 0.4108 |
0.6372 | 9000 | 0.6202 | - | - | - | - |
0.7080 | 10000 | 0.6276 | 0.3072 | 0.3778 | 0.5613 | 0.4154 |
0.7788 | 11000 | 0.6101 | - | - | - | - |
0.8496 | 12000 | 0.6016 | 0.3049 | 0.3756 | 0.5635 | 0.4147 |
0.9204 | 13000 | 0.6063 | - | - | - | - |
**0.9912** | **14000** | **0.5905** | **0.3043** | **0.3813** | **0.5626** | **0.4161** |
1.0619 | 15000 | 0.5734 | - | - | - | - |
1.1327 | 16000 | 0.581 | 0.3119 | 0.3764 | 0.5555 | 0.4146 |
1.2035 | 17000 | 0.5744 | - | - | - | - |
1.2743 | 18000 | 0.5769 | 0.3121 | 0.3682 | 0.5566 | 0.4123 |
1.3451 | 19000 | 0.5773 | - | - | - | - |
1.4159 | 20000 | 0.5767 | 0.3132 | 0.3656 | 0.5602 | 0.4130 |
1.4867 | 21000 | 0.5662 | - | - | - | - |
1.5575 | 22000 | 0.5662 | 0.3204 | 0.3656 | 0.5557 | 0.4139 |
1.6283 | 23000 | 0.5586 | - | - | - | - |
1.6991 | 24000 | 0.5659 | 0.3209 | 0.3664 | 0.5599 | 0.4157 |
1.7699 | 25000 | 0.578 | - | - | - | - |
1.8407 | 26000 | 0.5749 | 0.3132 | 0.3656 | 0.5599 | 0.4129 |
1.9115 | 27000 | 0.5845 | - | - | - | - |
1.9823 | 28000 | 0.5769 | 0.3132 | 0.3664 | 0.5611 | 0.4136 |
2.0531 | 29000 | 0.5714 | - | - | - | - |
2.1239 | 30000 | 0.5696 | 0.3132 | 0.3673 | 0.5606 | 0.4137 |
2.1947 | 31000 | 0.568 | - | - | - | - |
2.2655 | 32000 | 0.5767 | 0.3209 | 0.3664 | 0.5602 | 0.4158 |
2.3363 | 33000 | 0.5785 | - | - | - | - |
2.4071 | 34000 | 0.5666 | 0.3206 | 0.3664 | 0.5604 | 0.4158 |
2.4779 | 35000 | 0.5608 | - | - | - | - |
2.5487 | 36000 | 0.5563 | 0.3206 | 0.3656 | 0.5602 | 0.4155 |
2.6195 | 37000 | 0.5768 | - | - | - | - |
2.6903 | 38000 | 0.569 | 0.3206 | 0.3664 | 0.5602 | 0.4158 |
2.7611 | 39000 | 0.5723 | - | - | - | - |
2.8319 | 40000 | 0.5714 | 0.3206 | 0.3664 | 0.5606 | 0.4159 |
2.9027 | 41000 | 0.5621 | - | - | - | - |
2.9735 | 42000 | 0.5724 | 0.3206 | 0.3664 | 0.5602 | 0.4158 |
-1 | -1 | - | 0.3043 | 0.3813 | 0.5626 | 0.4161 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.11.13
- Sentence Transformers: 5.0.0
- Transformers: 4.53.1
- PyTorch: 2.7.1+cu128
- Accelerate: 1.8.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
CachedMultipleNegativesRankingLoss
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}