CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset: ms_marco
- Language: en
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle")
# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ],
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
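For semantic search, a cross encoder is typically paired with a fast bi-encoder retriever: the bi-encoder selects candidates from a corpus and this model rescores them. The following is a minimal retrieve-and-rerank sketch; the retriever name ("all-MiniLM-L6-v2"), the toy corpus, and the query are illustrative assumptions rather than part of this model.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

# Assumption: any bi-encoder can serve as the first-stage retriever; "all-MiniLM-L6-v2" is just an example.
retriever = SentenceTransformer("all-MiniLM-L6-v2")
reranker = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle")

corpus = [
    'There are on average between 55 and 80 calories in an egg depending on its size.',
    'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
    'Most of the calories in an egg come from the yellow yolk in the center.',
]
query = 'How many calories in an egg'

# Stage 1: embed the corpus once and retrieve candidates with the bi-encoder
corpus_embeddings = retriever.encode(corpus, convert_to_tensor=True)
query_embedding = retriever.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=100)[0]

# Stage 2: rescore the candidates with the cross encoder and sort by its scores
candidates = [corpus[hit['corpus_id']] for hit in hits]
scores = reranker.predict([(query, candidate) for candidate in candidates])
for candidate, score in sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{score:.4f}  {candidate}")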
Evaluation
Metrics
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100 and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters:
  { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.4975 (+0.0079) | 0.3252 (+0.0642) | 0.5857 (+0.1661) |
mrr@10 | 0.4843 (+0.0068) | 0.5679 (+0.0681) | 0.5922 (+0.1655) |
ndcg@10 | 0.5506 (+0.0102) | 0.3756 (+0.0505) | 0.6570 (+0.1563) |
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
  { "dataset_names": ["msmarco", "nfcorpus", "nq"], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4695 (+0.0794) |
mrr@10 | 0.5481 (+0.0801) |
ndcg@10 | 0.5277 (+0.0724) |
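The Nano BEIR numbers above can be reproduced with the evaluator configuration listed in this section. A minimal sketch, assuming the CrossEncoderNanoBEIREvaluator import path of recent sentence-transformers releases; only the parameters shown above are set:
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle")

# Parameters taken from the evaluator configuration above
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # dictionary mapping metric names (map, mrr@10, ndcg@10, ...) to values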
Training Details
Training Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 78,704 training samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
 | query | docs | labels |
---|---|---|---|
type | string | list | list |
details | min: 9 characters, mean: 33.73 characters, max: 119 characters | min: 2 elements, mean: 6.00 elements, max: 10 elements | min: 2 elements, mean: 6.00 elements, max: 10 elements |
- Samples:
  - query: what to avoid during early pregnancy
    docs:
['Although caffeine does not come under the category of foods to avoid during early pregnancy, pregnant women are advised to limit their caffeine consumption. Caffeine can be found in tea, coffee, soft drinks, chocolate etc.', 'Learn what foods to eat and what to avoid during pregnancy to ensure a healthy environment for your unborn baby! As a concerned parent, you want to do everything possible to ensure the well being and safety of your baby.', 'To stay safe, also avoid these foods during your pregnancy. Meats. 1 Cold cuts, deli meats, hot dogs, and other ready-to-eat meats. ( 2 You can safely eat these if they are heated to steaming and served hot.). 3 Pre-stuffed, fresh, turkey or chicken. 4 Steak tartare or any raw meat. 5 Rare cuts of meat and undercooked meats.', 'Raw and undercooked meat is another among foods to avoid during first trimester. Make sure that the meat is well cooked and consume while it is still hot. It would be good to avoid processed meat since pregnant wom...
    labels: [1, 0, 0, 0, 0, ...]
  - query: where is bells creek
    docs:
['Simpsonville, SC Real Estate. #facebook# Bells Creek is a small neighborhood in Simpsonville, SC, close to Bells Crossing Elementary. Located near Woodruff Rd., I-85 and I-385, the upscale Bells Creek homes are typically on large lots with mature trees. Bells Creek amenities include a pool and cabana. Bells Creek real estate prices average $220,000.', '#facebook# Bells Creek is a small neighborhood in Simpsonville, SC, close to Bells Crossing Elementary. Located near Woodruff Rd., I-85 and I-385, the upscale Bells Creek homes are typically on large lots with mature trees. Bells Creek amenities include a pool and cabana. Bells Creek real estate prices average $220,000.', "Welcome to The Overlook at Bells Creek, an exclusive Eastwood Homes' Greenville area community only minutes away from Five Forks in Simpsonville, SC.", 'Property Details. Property details for 213 Bells Creek Dr, Simpsonville, SC 29681. This Single Family Home is located at Bells Creek in Simpsonville, South Carolina. The home provides approximately 2074 square feet of living space. This property features 4 bedrooms. There are 3 bathrooms. 213 Bells Creek Dr, Simpsonville, SC 29681 falls within the Greenville county lines. This home sold for $180,000 on Dec 17, 2014. Similar homes in the area are priced around $187,091.']
    labels: [1, 1, 0, 0]
  - query: how long does it take to hatch geese eggs in an incubator
    docs:
['Geese take 31 days of incubation for a goose egg to hatch. Whether underneath its parents, or in an incubator, the incubation time is the same.', 'While chicken eggs take 21 days, for example, geese can take between 30 and 35 days and need a higher humidity level. Goslings are also more likely to hatch if the eggs are sprayed with water every day between days 6 and about 25, whereas chicken eggs need to be kept in humid conditions, but dry.', 'Incubation Duration. Incubating goose eggs should be done for a period of about 28 days for smaller breeds, and up to 35 days for larger breeds before pipping begins. Once goose eggs begin hatching, the process can take up to three days before they are completely out of their shell.', "It takes 21 days to incubate the egg. the 21st day is the hatching day. if the eggs are mail order don't count the day the eggs arrive if the temperature is below 55 degrees f … . on the 21st day the eggs will hatch at all different times.", '5. Wait until the eg...
    labels: [1, 0, 0, 0, 0, ...]
- Loss: ListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.ListMLELoss.ListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": 16, "respect_input_order": true }
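For reference, the loss configuration above can be instantiated roughly as follows. This is a sketch based on the class paths recorded in the parameter dump and on the development version listed under Framework Versions (3.5.0.dev0); in other releases the position-aware ListMLE variant and its lambda weight may be exposed under different names, and the lambda weight's constructor arguments are left at their defaults here.
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses.ListMLELoss import ListMLELoss, ListMLELambdaWeight

# Base model and single output label, as described under Model Details
model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Position-aware ListMLE ("p-ListMLE"): the lambda weight typically emphasises the top of the ranked list
loss = ListMLELoss(
    model=model,
    lambda_weight=ListMLELambdaWeight(),
    activation_fct=torch.nn.Identity(),
    mini_batch_size=16,
    respect_input_order=True,
)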
Evaluation Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 1,000 evaluation samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:
 | query | docs | labels |
---|---|---|---|
type | string | list | list |
details | min: 12 characters, mean: 33.41 characters, max: 94 characters | min: 3 elements, mean: 6.50 elements, max: 10 elements | min: 3 elements, mean: 6.50 elements, max: 10 elements |
- Samples:
  - query: who does glenn beck want for speaker of the house
    docs: ["
. Nebraska Senator Ben Sasse spoke to Glenn Beck Wednesday about Sasse's idea to make Arthur Brooks, head of the American Enterprise Institute, the next Speaker of the House. There’s nothing in the Constitution that requires you to have a Speaker who is an elected member of Congress, Sasse said.", 'And the guy that kept coming to my mind as I was watching Sunday Night Football is Arthur Brooks, the head of AEI. And so I think that the House Republicans should think about going outside the box. There’s nothing in the Constitution that requires you to have a Speaker who is an elected member of Congress.', 'Share on Facebook Share on Twitter. Conservative radio host Glenn Beck went off on Republican leadership after the House passed a budget compromise Thursday, calling House Speaker John Boehner “worthless” and Senate Minority Leader Mitch McConnell a “liar.”.', '“I think John Boehner is one of the prime examples of worthless, worthless Republicans,” Beck said Thursday on Mark Levin’s...
  - query: how long do you have to keep former employee records
    docs:
['Employee Contracts. If you have employment contracts with your employees, then you should maintain these contracts for at least 10 years. According to Financial Web, you should “err on the side of caution” and maintain employee records for a longer period than you think you may need, in case a legal issue arises.', 'Small business expert Rieva Lesonsky suggests you keep employment records for a minimum of two years and for up to seven years. She says that most states have a two-year statute of limitations on lawsuit filings by former employees, so you want to make sure you have documents at hand if this occurs.', 'Since employees may come and go, you may wonder how long you should hang on to the employee records. The Internal Revenue Service (IRS) weighs in on records pertaining to employee taxes, such as payroll, but the other records depend on what types of records you have for employees.', 'Effective January 1, 2013, California law provides that current and former employees (or a ...
    labels: [1, 0, 0, 0, 0, ...]
  - query: what year was velcro invented
    docs:
['Velcro was invented by George de Mestral a Swiss electrical engineer in 1941. This idea of inventing Velcro came to him when one day he returned after a walk from the hills and found cockleburs stuck to his clothes and his dog’s fur. George noticed its natural hook and loop quality and started making a fabric fastener on the same quality.', 'Velcro, which was invented by a Swiss Electrical Engineer George de Mestral, comprises two layers and when both these sides are hard-pressed together, they assist in fixing two surfaces. The thought to invent Velcro hits Mestral’s mind in the year 1941 after coming back from a hunting tour with his dog. ', 'In 1958, de Mestral filed for a patent application for his hook-and-loop fastener in Switzerland, which was granted in 1961. The term Velcro is a registered trademark of Velcro Industries B.V. Velcro Industries is a privately held worldwide corporation manufacturing consumer and industrial products. Among them is a series of mechanical-based f...
    labels: [1, 0, 0, 0, 0, ...]
- Loss: ListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.ListMLELoss.ListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": 16, "respect_input_order": true }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- load_best_model_at_end: True
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
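Putting the pieces together, the non-default hyperparameters above map onto a training run along the following lines. This is a sketch rather than the exact training script: the toy dataset stands in for the processed ms_marco splits with the (query, docs, labels) schema described earlier, the output_dir is a hypothetical path, and the CrossEncoderTrainer / CrossEncoderTrainingArguments names follow recent sentence-transformers releases.
import torch
from datasets import Dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments
from sentence_transformers.cross_encoder.losses.ListMLELoss import ListMLELoss, ListMLELambdaWeight

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)

# Loss configured as in the Training Dataset section above
loss = ListMLELoss(
    model=model,
    lambda_weight=ListMLELambdaWeight(),
    activation_fct=torch.nn.Identity(),
    mini_batch_size=16,
    respect_input_order=True,
)

# Toy stand-in for the ms_marco (query, docs, labels) splits described above
train_dataset = Dataset.from_dict({
    "query": ["how many calories in an egg"],
    "docs": [[
        "There are on average between 55 and 80 calories in an egg depending on its size.",
        "Most of the calories in an egg come from the yellow yolk in the center.",
    ]],
    "labels": [[1, 0]],
})
eval_dataset = train_dataset

# Non-default hyperparameters listed above; everything else keeps its default value
args = CrossEncoderTrainingArguments(
    output_dir="reranker-msmarco-MiniLM-L12-plistmle",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,  # mixed-precision setting used for the original run
    eval_strategy="steps",
    load_best_model_at_end=True,
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()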
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | - | 0.0344 (-0.5060) | 0.2073 (-0.1178) | 0.0336 (-0.4671) | 0.0918 (-0.3636) |
0.0002 | 1 | 1412.3083 | - | - | - | - | - |
0.0508 | 250 | 887.7485 | - | - | - | - | - |
0.1016 | 500 | 853.8898 | 903.5635 | 0.2242 (-0.3163) | 0.2467 (-0.0783) | 0.3585 (-0.1421) | 0.2765 (-0.1789) |
0.1525 | 750 | 867.3723 | - | - | - | - | - |
0.2033 | 1000 | 851.3223 | 880.1996 | 0.4790 (-0.0614) | 0.3435 (+0.0184) | 0.5945 (+0.0938) | 0.4723 (+0.0170) |
0.2541 | 1250 | 840.5654 | - | - | - | - | - |
0.3049 | 1500 | 836.1076 | 872.8075 | 0.5189 (-0.0216) | 0.3394 (+0.0143) | 0.6097 (+0.1091) | 0.4893 (+0.0339) |
0.3558 | 1750 | 853.3524 | - | - | - | - | - |
0.4066 | 2000 | 859.1896 | 872.7851 | 0.5453 (+0.0049) | 0.3638 (+0.0387) | 0.6322 (+0.1315) | 0.5137 (+0.0584) |
0.4574 | 2250 | 816.2849 | - | - | - | - | - |
0.5082 | 2500 | 832.0728 | 866.5376 | 0.5428 (+0.0023) | 0.3737 (+0.0487) | 0.6384 (+0.1378) | 0.5183 (+0.0629) |
0.5591 | 2750 | 825.9285 | - | - | - | - | - |
0.6099 | 3000 | 809.4326 | 865.0468 | 0.5319 (-0.0085) | 0.3488 (+0.0238) | 0.6320 (+0.1313) | 0.5042 (+0.0489) |
0.6607 | 3250 | 807.3669 | - | - | - | - | - |
0.7115 | 3500 | 828.0153 | 869.0601 | 0.5479 (+0.0075) | 0.3690 (+0.0440) | 0.6495 (+0.1488) | 0.5221 (+0.0668) |
0.7624 | 3750 | 841.2574 | - | - | - | - | - |
0.8132 | 4000 | 814.0583 | 865.1564 | 0.5406 (+0.0001) | 0.3571 (+0.0320) | 0.6519 (+0.1513) | 0.5165 (+0.0612) |
0.8640 | 4250 | 814.6952 | - | - | - | - | - |
**0.9148** | **4500** | **825.9762** | **864.4775** | **0.5506 (+0.0102)** | **0.3756 (+0.0505)** | **0.6570 (+0.1563)** | **0.5277 (+0.0724)** |
0.9656 | 4750 | 821.2723 | - | - | - | - | - |
-1 | -1 | - | - | 0.5506 (+0.0102) | 0.3756 (+0.0505) | 0.6570 (+0.1563) | 0.5277 (+0.0724) |
- The bold row denotes the saved checkpoint.
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 0.244 kWh
- Carbon Emitted: 0.095 kg of CO2
- Hours Used: 0.917 hours
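The card reports these figures as measured with CodeCarbon. A minimal sketch of how the tracker is typically wrapped around a training run; the project name and the run_training() placeholder are illustrative, not part of this model's actual training code.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="reranker-training")  # hypothetical project name
tracker.start()
try:
    run_training()  # placeholder for the training run being measured
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2 emitted during the tracked block
print(emissions_kg)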
Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA GeForce RTX 3090
- CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
- RAM Size: 31.78 GB
Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
ListMLELoss
@inproceedings{lan2013position,
title={Position-aware ListMLE: a sequential learning process for ranking},
author={Lan, Yanyan and Guo, Jiafeng and Cheng, Xueqi and Liu, Tie-Yan},
booktitle={Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence},
pages={333--342},
year={2013}
}