CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
This is a Cross Encoder model finetuned from microsoft/MiniLM-L12-H384-uncased on the ms_marco dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
Model Details
Model Description
- Model Type: Cross Encoder
- Base model: microsoft/MiniLM-L12-H384-uncased
- Maximum Sequence Length: 512 tokens
- Number of Output Labels: 1 label
- Training Dataset: ms_marco
- Language: en
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Cross Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Cross Encoders on Hugging Face
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("yjoonjang/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle-normalize-minmax")
# Get scores for pairs of texts
pairs = [
['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'How many calories in an egg',
[
'There are on average between 55 and 80 calories in an egg depending on its size.',
'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
'Most of the calories in an egg come from the yellow yolk in the center.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
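Under the hood, rank is essentially predict followed by a descending sort over corpus indices. A minimal sketch of that post-processing (rank_by_score is a hypothetical helper for illustration, not part of the sentence-transformers API):

```python
def rank_by_score(scores):
    # Sort corpus indices by descending score, mirroring the
    # [{'corpus_id': ..., 'score': ...}, ...] records returned by CrossEncoder.rank.
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return [{"corpus_id": i, "score": scores[i]} for i in order]

ranks = rank_by_score([0.2, 0.9, 0.5])
# highest-scoring document first: corpus_id 1, then 2, then 0
```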
Evaluation
Metrics
Cross Encoder Reranking
- Datasets: NanoMSMARCO_R100, NanoNFCorpus_R100, and NanoNQ_R100
- Evaluated with CrossEncoderRerankingEvaluator with these parameters:
  { "at_k": 10, "always_rerank_positives": true }
Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
---|---|---|---|
map | 0.5062 (+0.0166) | 0.3309 (+0.0699) | 0.5849 (+0.1653) |
mrr@10 | 0.4946 (+0.0171) | 0.5983 (+0.0985) | 0.5876 (+0.1609) |
ndcg@10 | 0.5634 (+0.0229) | 0.3493 (+0.0243) | 0.6396 (+0.1390) |
Cross Encoder Nano BEIR
- Dataset: NanoBEIR_R100_mean
- Evaluated with CrossEncoderNanoBEIREvaluator with these parameters:
  { "dataset_names": ["msmarco", "nfcorpus", "nq"], "rerank_k": 100, "at_k": 10, "always_rerank_positives": true }
Metric | Value |
---|---|
map | 0.4740 (+0.0839) |
mrr@10 | 0.5602 (+0.0921) |
ndcg@10 | 0.5174 (+0.0621) |
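As the evaluator name suggests, the NanoBEIR mean metrics appear to be unweighted averages of the three per-dataset scores; a quick check with the ndcg@10 values from the reranking table above bears this out:

```python
# ndcg@10 per dataset, copied from the Cross Encoder Reranking table above
ndcg_at_10 = {
    "NanoMSMARCO_R100": 0.5634,
    "NanoNFCorpus_R100": 0.3493,
    "NanoNQ_R100": 0.6396,
}

mean_ndcg = sum(ndcg_at_10.values()) / len(ndcg_at_10)
print(round(mean_ndcg, 4))  # 0.5174, matching the NanoBEIR_R100_mean row
```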
Training Details
Training Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 78,704 training samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:

Column | Type | Min | Mean | Max |
---|---|---|---|---|
query | string | 11 characters | 33.66 characters | 97 characters |
docs | list | 2 elements | 6.00 elements | 10 elements |
labels | list | 2 elements | 6.00 elements | 10 elements |
- Samples:

query: what year did the us acquire land from the miami indians
docs: ['By 1846, most of the Miami had been removed to Indian Territory (now Oklahoma). The Miami Tribe of Oklahoma is the only federally recognized tribe of Miami Indians in the United States. The Miami Nation of Indiana is an unrecognized tribe. The Miami of Kekionga remained allies of the British, but were not openly hostile to the United States (US) (except when attacked by Augustin de La Balme in 1780). The U.S. government did not trust their neutrality, however.', 'In June 1816, a constitutional convention was held and a state government was formed. The territory was dissolved on December 11, 1816, by an act of Congress granting statehood to Indiana. In February 1815, the United States House of Representatives began debate on granting Indiana Territory statehood. In early 1816, the Territory approved a census and Pennington was named to be the census enumerator.', 'Stuart Banner, a law professor, does not deny that between the early 17th century and the end of the 19th, nearly the enti...
labels: [1, 1, 0, 0, 0, ...]

query: what is a business director
docs: ['Intel Board of Directors. A director is a person from a group of managers who leads or supervises a particular area of a company, program, or project. Companies that use this term often have many directors spread throughout different business functions or roles (e.g. director of human resources). The director usually reports directly to a vice president or to the CEO directly in order to let them know the progress of the organization. An executive director within a company or an organization is usually from the board of directors and oversees a specific department within the organization such as Marketing, Finance, Production and IT.', 'company director. An appointed or elected member of the board of directors of a company who, with other directors, has the responsibility for determining and implementing the company’s policy.', 'Microsoft Outlook 2013 with Business Contact Manager is a great customer relationship management (CRM) tool for small business owners because they can use it...
labels: [1, 0, 0, 0, 0, ...]

query: why is the thyroid gland important
docs: ['The thyroid is a small, butterfly-shaped gland located at the base of your neck. It is one of many glands in the endocrine system in the body that regulate the function, growth and development of virtually every cell, tissue and organ in the body. Endocrine glands secrete hormones directly into the bloodstream.', 'Thyroid dysfunction is when the thyroid gland, a small, butterfly-shaped gland located at the base of your neck, produces too much thyroid hormone. This is when you body’s endocrine system speed up, which is referred to as hyperthyroidism.', 'Thyroxine is the most important hormone produced by the thyroid gland. When the gland produces little or too much of this hormone, the body system faces major challenges. For example, if the thyroid is under-active, this could result in Goitre, which is a swelling at the neck.', 'The anterior pituitary makes several important hormones-growth hormone, puberty hormones (or gonadotrophins), thyroid stimulating hormone (TSH, which stimulat...
labels: [1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
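PListMLELoss is a position-aware variant of ListMLE: the documents are sorted into the order implied by the labels, and a Plackett-Luce negative log-likelihood is accumulated, with earlier (more relevant) positions optionally weighted more heavily. A toy, pure-Python sketch of that idea (not the library's implementation, which operates on batched logits and uses PListMLELambdaWeight for the position weights):

```python
import math

def listmle_loss(scores, labels, position_weights=None):
    # Sort documents into the target order: most relevant first.
    order = sorted(range(len(scores)), key=lambda i: labels[i], reverse=True)
    loss = 0.0
    for k, idx in enumerate(order):
        w = position_weights[k] if position_weights is not None else 1.0
        # -log P(doc at position k is chosen from the remaining candidates);
        # real implementations use a numerically stable logsumexp here.
        log_denom = math.log(sum(math.exp(scores[j]) for j in order[k:]))
        loss += w * (log_denom - scores[idx])
    return loss

# Scores that agree with the labels give a lower loss than inverted scores:
good = listmle_loss([3.0, 2.0, 1.0], [1, 1, 0])
bad = listmle_loss([1.0, 2.0, 3.0], [1, 1, 0])
```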
Evaluation Dataset
ms_marco
- Dataset: ms_marco at a47ee7a
- Size: 1,000 evaluation samples
- Columns: query, docs, and labels
- Approximate statistics based on the first 1000 samples:

Column | Type | Min | Mean | Max |
---|---|---|---|---|
query | string | 11 characters | 33.08 characters | 94 characters |
docs | list | 1 elements | 5.50 elements | 10 elements |
labels | list | 1 elements | 5.50 elements | 10 elements |
- Samples:

query: how many assistants does michelle have
docs: ['Never in the history of the White House has a First Lady spent so much on so many personal assistants, all paid from taxpayer dollars. Hilary Clinton had three (3)! Michelle has 26, from makeup artist Ingrid Miles and hairstylist Johnny Wright to her “chief of staff” Susan Sher whose salary is $172,200.00!', 'Allegations that Michelle Obama has an excessively large staff compared to other first ladies is nothing new. In 2009, FactCheck.org and Snopes.com debunked the claim circulated in a chain e-mail that Michelle Obama had an unprecedented number of staffers, with 22.', "Of course since Michelle Obama's Twitter account tweeted today to announce that it couldn't Tweet, the situation probably won't become urgent until Thursday. But we finally have an answer to the question: How many assistants does it take to Tweet a link for Michelle Obama. The answer: 16. You filthy Republicans.", "Myra Gutin, an expert on first ladies and politics at Rider University in New Jersey, said that as of...
labels: [1, 0, 0, 0, 0, ...]

query: how long and at what temperature to bake salmon
docs: ["Oven Temperature. Another thing that determines how long the salmon is baked is oven temperature. Typically, recipes for baking salmon call for an oven temperature of 350 to 450 degrees. The salmon should always be put into a pre-heated oven. Cooking in an oven that hasn't been pre-heated can cause drying of the fish. 1 A two-inch thick fillet will bake for 20 minutes. 2 A 1-1/2 filet will take 15 minutes and so on. 3 Check the salmon frequently. 4 Start checking at about 10 minutes, and keep checking until the flesh of the fish is just barely an opaque pink.", 'Preheat the oven to 450 degrees F. Season salmon with salt and pepper. Place salmon, skin side down, on a non-stick baking sheet or in a non-stick pan with an oven-proof handle. Bake until salmon is cooked through, about 12 to 15 minutes. Serve with the Toasted Almond Parsley Salad and squash, if desired. Mince the shallot and add to a small bowl.', 'Report Abuse. I preheat the oven to 500 degrees, really hot. Put the salm...
labels: [1, 0, 0, 0, 0, ...]

query: what is gene deletion
docs: ["Deletion on a chromosome. In genetics, a deletion (also called gene deletion, deficiency, or deletion mutation) (sign: δ) is a mutation (a genetic aberration) in which a part of a chromosome or a sequence of DNA is lost during DNA replication. Any number of nucleotides can be deleted, from a single base to an entire piece of chromosome. The smallest single base deletion mutations are believed occur by a single base flipping in the template DNA, followed by template DNA strand slippage, within the DNA polymerase active site. 1 ' Terminal Deletion' — a deletion that occurs towards the end of a chromosome. 2 Intercalary Deletion / Interstitial Deletion — a deletion that occurs from the interior of a chromosome. 3 Microdeletion — a relatively small amount of deletion (up to 5Mb that could include a dozen genes).", '22q11.2 deletion syndrome (which is also known by several other names, listed below) is a disorder caused by the deletion of a small piece of chromosome 22. The deletion occ...
labels: [1, 0, 0, 0, 0, ...]
- Loss: PListMLELoss with these parameters:
  { "lambda_weight": "sentence_transformers.cross_encoder.losses.PListMLELoss.PListMLELambdaWeight", "activation_fct": "torch.nn.modules.linear.Identity", "mini_batch_size": null, "respect_input_order": true }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- seed: 12
- bf16: True
- load_best_model_at_end: True
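These settings can be sanity-checked against the training logs below: 78,704 samples at batch size 16 give 4,919 optimizer steps per epoch (the last logged step, 4750, lands at epoch 0.9656 ≈ 4750/4919), and a warmup_ratio of 0.1 corresponds to roughly 492 warmup steps. A quick calculation, assuming a single device and no gradient accumulation:

```python
import math

total_samples = 78_704   # ms_marco training split size
batch_size = 16          # per_device_train_batch_size (single device assumed)
warmup_ratio = 0.1
num_epochs = 1

steps_per_epoch = math.ceil(total_samples / batch_size)
total_steps = steps_per_epoch * num_epochs
warmup_steps = math.ceil(total_steps * warmup_ratio)

print(steps_per_epoch, warmup_steps)  # 4919 492
```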
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 12
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
---|---|---|---|---|---|---|---|
-1 | -1 | - | - | 0.0224 (-0.5181) | 0.2459 (-0.0791) | 0.0785 (-0.4221) | 0.1156 (-0.3398) |
0.0002 | 1 | 2.2034 | - | - | - | - | - |
0.0508 | 250 | 2.1047 | - | - | - | - | - |
0.1016 | 500 | 1.9773 | 1.9326 | 0.1454 (-0.3951) | 0.2533 (-0.0717) | 0.2214 (-0.2793) | 0.2067 (-0.2487) |
0.1525 | 750 | 1.911 | - | - | - | - | - |
0.2033 | 1000 | 1.8706 | 1.8490 | 0.4764 (-0.0640) | 0.3298 (+0.0048) | 0.5301 (+0.0295) | 0.4455 (-0.0099) |
0.2541 | 1250 | 1.8645 | - | - | - | - | - |
0.3049 | 1500 | 1.857 | 1.8414 | 0.5404 (-0.0001) | 0.3443 (+0.0192) | 0.6513 (+0.1506) | 0.5120 (+0.0566) |
0.3558 | 1750 | 1.8524 | - | - | - | - | - |
0.4066 | 2000 | 1.841 | 1.8224 | 0.5780 (+0.0375) | 0.3498 (+0.0247) | 0.6080 (+0.1074) | 0.5119 (+0.0565) |
0.4574 | 2250 | 1.8239 | - | - | - | - | - |
0.5082 | 2500 | 1.8221 | 1.8216 | 0.5538 (+0.0134) | 0.3481 (+0.0230) | 0.6245 (+0.1238) | 0.5088 (+0.0534) |
0.5591 | 2750 | 1.8238 | - | - | - | - | - |
0.6099 | 3000 | 1.8377 | 1.8066 | 0.5280 (-0.0124) | 0.3363 (+0.0113) | 0.5669 (+0.0663) | 0.4771 (+0.0217) |
0.6607 | 3250 | 1.8357 | - | - | - | - | - |
0.7115 | 3500 | 1.8221 | 1.8041 | 0.5424 (+0.0020) | 0.3481 (+0.0230) | 0.5980 (+0.0973) | 0.4962 (+0.0408) |
0.7624 | 3750 | 1.8245 | - | - | - | - | - |
0.8132 | 4000 | 1.8287 | 1.8026 | 0.5627 (+0.0223) | 0.3564 (+0.0314) | 0.6185 (+0.1178) | 0.5125 (+0.0572) |
0.8640 | 4250 | 1.8125 | - | - | - | - | - |
0.9148 | 4500 | 1.8198 | 1.803 | 0.5634 (+0.0229) | 0.3493 (+0.0243) | 0.6396 (+0.1390) | 0.5174 (+0.0621) |
0.9656 | 4750 | 1.8193 | - | - | - | - | - |
-1 | -1 | - | - | 0.5634 (+0.0229) | 0.3493 (+0.0243) | 0.6396 (+0.1390) | 0.5174 (+0.0621) |
- The bold row (epoch 0.9148, step 4500, whose metrics match the final evaluation above) denotes the saved checkpoint.
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.4.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
PListMLELoss
@inproceedings{lan2014position,
title={Position-Aware ListMLE: A Sequential Learning Process for Ranking.},
author={Lan, Yanyan and Zhu, Yadong and Guo, Jiafeng and Niu, Shuzi and Cheng, Xueqi},
booktitle={UAI},
volume={14},
pages={449--458},
year={2014}
}