PyLate model based on answerdotai/ModernBERT-base
This is a PyLate model finetuned from answerdotai/ModernBERT-base. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
Model Details
Model Description
- Model Type: PyLate model
- Base model: answerdotai/ModernBERT-base
- Document Length: 180 tokens
- Query Length: 32 tokens
- Output Dimensionality: 128 dimensions
- Similarity Function: MaxSim
Model Sources
- Documentation: PyLate Documentation (https://lightonai.github.io/pylate/)
- Repository: PyLate on GitHub (https://github.com/lightonai/pylate)
- Hugging Face: PyLate models on Hugging Face (https://huggingface.co/models?library=PyLate)
Full Model Architecture
ColBERT(
  (0): Transformer({'max_seq_length': 31, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
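For reference, the MaxSim score sums, for each query token, its maximum similarity over all document tokens. Below is a minimal, hypothetical PyTorch sketch of the operator (not PyLate's internal implementation); it assumes L2-normalized token embeddings, as ColBERT-style models produce.

import torch

def maxsim_score(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    # query_emb: (num_query_tokens, 128), doc_emb: (num_doc_tokens, 128).
    # Rows are assumed L2-normalized, so the dot product is a cosine similarity.
    sim = query_emb @ doc_emb.T              # (num_query_tokens, num_doc_tokens)
    # For each query token, keep the best-matching document token, then sum.
    return sim.max(dim=1).values.sum()

# Toy example with random embeddings of the model's output dimensionality (128)
q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(180, 128), dim=-1)
print(maxsim_score(q, d))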
Usage
First install the PyLate library:
pip install -U pylate
Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. Indexing is backed by the Voyager HNSW index, which efficiently stores the document token embeddings and enables fast retrieval.
Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-1-neg-5-epoch-gooaq-1995000",
)

# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)
Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search, encode the queries, and then retrieve the top-k documents to get the top matching ids and relevance scores:
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
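The retriever returns one ranked list of matches per query. A hedged sketch of inspecting the results, assuming the "id" and "score" fields described in the PyLate documentation:

# Each entry of `scores` corresponds to one query, in the order the queries
# were encoded; the "id"/"score" field names are taken from the PyLate docs.
for query, matches in zip(["query for document 3", "query for document 1"], scores):
    print(query)
    for match in matches:
        print(f"  id={match['id']} score={match['score']:.2f}")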
Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank.rerank function and pass the queries and documents to rerank:
from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-1-neg-5-epoch-gooaq-1995000",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
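rank.rerank returns, for each query, its candidate documents re-ordered by MaxSim score. A hedged sketch of reading the output, again assuming the "id"/"score" fields per the PyLate documentation:

# Print (id, score) pairs per query, best match first (field names assumed).
for matches in reranked_documents:
    print([(match["id"], round(match["score"], 2)) for match in matches])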
Evaluation
Metrics
ColBERTTriplet
- Evaluated with `pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator`
Metric | Value |
---|---|
accuracy | 0.4588 |
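For completeness, a hedged sketch of running the evaluator yourself. The argument names below (anchors, positives, negatives) are assumptions based on the sentence-transformers TripletEvaluator interface that this evaluator appears to mirror, not confirmed from the PyLate source; the triplet is taken from the evaluation samples shown under Training Details.

from pylate import evaluation, models

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-1-neg-5-epoch-gooaq-1995000",
)

# Accuracy is the fraction of triplets where MaxSim(anchor, positive)
# exceeds MaxSim(anchor, negative); argument names are assumptions.
evaluator = evaluation.ColBERTTripletEvaluator(
    anchors=["are the apes in planet of the apes cgi?"],
    positives=["All of the apes audiences see on-screen are motion-capture CGI apes."],
    negatives=["Among the living primates, humans are most closely related to the apes."],
)
print(evaluator(model))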
Training Details
Training Dataset
Unnamed Dataset
- Size: 1,867,662 training samples
- Columns: `question`, `answer`, and `negative`
- Approximate statistics based on the first 1000 samples:
|         | question | answer | negative |
|:--------|:---------|:-------|:---------|
| type    | string   | string | string   |
| details | min: 9 tokens, mean: 13.12 tokens, max: 22 tokens | min: 16 tokens, mean: 31.73 tokens, max: 32 tokens | min: 15 tokens, mean: 31.64 tokens, max: 32 tokens |
- Samples:
| question | answer | negative |
|:---------|:-------|:---------|
| are mandarins same as clementines? | Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella. | A: CUTIES® are actually two varieties of mandarins: Clementine mandarins, available November through January; and W. Murcott mandarins, available February through April. ... Unlike other mandarins or oranges, they are seedless, super sweet, easy to peel and kid-sized—only a select few achieve CUTIES® ' high standards. |
| why are snow leopards no longer endangered? | The snow leopard is no longer an endangered species, but its population in the wild is still at risk because of poaching and habitat loss, conservationists said this week. ... Conservationists warned that the risks are not over for the snow leopards, whose distinctive appearances make them attractive to poachers. | The term "big cat" is typically used to refer to any of the five living members of the genus Panthera, namely tiger, lion, jaguar, leopard, and snow leopard. Except the snow leopard, these species are able to roar. |
| are waves measured from the front or back? | In scientific terms and most used by the surfing community around the world, the wave height is measured vertically from the trough to the crest and is known by surfers as face scale. In Hawaii, local surfers use the back of the wave to measure wave height and is called Hawaiian scale or local scale. | Wavelength is the distance between sound waves while frequency is the number of times in which the sound wave occurs. 2. Wavelength is used to measure the length of sound waves while frequency is used to measure the recurrence of sound waves. |
- Loss: `pylate.losses.contrastive.Contrastive`
Evaluation Dataset
Unnamed Dataset
- Size: 5,000 evaluation samples
- Columns: `question`, `answer`, and `negative_1`
- Approximate statistics based on the first 1000 samples:
|         | question | answer | negative_1 |
|:--------|:---------|:-------|:-----------|
| type    | string   | string | string     |
| details | min: 9 tokens, mean: 13.02 tokens, max: 25 tokens | min: 16 tokens, mean: 31.66 tokens, max: 32 tokens | min: 15 tokens, mean: 31.41 tokens, max: 32 tokens |
- Samples:
| question | answer | negative_1 |
|:---------|:-------|:-----------|
| what is the best shampoo for thin curly hair? | ['Best For Daily Cleansing: Mizani True Textures Cream Cleansing Conditioner. ... ', 'Best For Coils: Ouidad VitalCurl Clear & Gentle Shampoo. ... ', 'Best For Restoring Shine: Shea Moisture Coconut & Hibiscus Curl & Shine Shampoo. ... ', 'Best For Fine Curls: Renee Furterer Sublime Curl Curl Activating Shampoo.'] | Whether you have straight or curly hair, thin or thick, this is another option that you should not miss for the best OGX shampoo. The Australian tea tree oils in this shampoo are effective for repair of oily, damaged, and frizzy hair. ... It also makes a great choice of shampoo for people who have dry scalp. |
| how many days after my period do i start ovulating? | Many women typically ovulate around 12 to 14 days after the first day of their last period, but some have a naturally short cycle. They may ovulate as soon as six days or so after the first day of their last period. | If you have a short cycle, for example, 21 days, and you bleed for 7 days, then you could ovulate right after your period. This is because ovulation generally occurs 12-16 days before your next period begins, and this would estimate you ovulating at days 6-10 of your cycle. |
| are the apes in planet of the apes cgi? | Unlike in the original 1968 film, there are no monkey suits, heavy makeup jobs or wigs. All of the apes audiences see on-screen are motion-capture CGI apes, which lends them a more realistic effect as the CGI is based on the actors' actual movements. | Among the living primates, humans are most closely related to the apes, which include the lesser apes (gibbons) and the great apes (chimpanzees, gorillas and orangutans). |
- Loss: `pylate.losses.contrastive.Contrastive`
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 180
- `per_device_eval_batch_size`: 180
- `learning_rate`: 3e-06
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 12
- `load_best_model_at_end`: True
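These settings map onto the sentence-transformers Trainer that PyLate builds on. Below is a hedged sketch of how such a run might be reproduced; the output directory and the one-row dataset are hypothetical stand-ins (the card does not name the actual training files), and the ColBERTCollator wiring follows the PyLate training documentation.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

# Base model and contrastive loss, matching the card's Loss entry
model = models.ColBERT(model_name_or_path="answerdotai/ModernBERT-base")
train_loss = losses.Contrastive(model=model)

# Hypothetical stand-in for the (question, answer, negative) training data
train_dataset = Dataset.from_dict({
    "question": ["why is the sky blue?"],
    "answer": ["Rayleigh scattering favors shorter wavelengths."],
    "negative": ["The sky often appears red at sunset."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="colbert-modernbert-gooaq",  # hypothetical path
    num_train_epochs=5,
    per_device_train_batch_size=180,
    learning_rate=3e-6,
    warmup_ratio=0.1,
    bf16=True,
    seed=12,
    dataloader_num_workers=12,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=train_loss,
    data_collator=utils.ColBERTCollator(model.tokenize),  # per PyLate docs
)
trainer.train()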
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 180
- `per_device_eval_batch_size`: 180
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 12
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
Training Logs
Epoch | Step | Training Loss | accuracy |
---|---|---|---|
0 | 0 | - | 0.4588 |
0.0004 | 1 | 19.095 | - |
0.0771 | 200 | 12.1999 | - |
0.1543 | 400 | 5.3887 | - |
0.2314 | 600 | 1.9588 | - |
0.3085 | 800 | 1.0338 | - |
0.3857 | 1000 | 0.8202 | - |
0.4628 | 1200 | 0.6943 | - |
0.5399 | 1400 | 0.6087 | - |
0.6170 | 1600 | 0.5484 | - |
0.6942 | 1800 | 0.5049 | - |
0.7713 | 2000 | 0.4754 | - |
0.8484 | 2200 | 0.4497 | - |
0.9256 | 2400 | 0.4298 | - |
1.0027 | 2600 | 0.4102 | - |
1.0798 | 2800 | 0.3861 | - |
1.1570 | 3000 | 0.3727 | - |
1.2341 | 3200 | 0.3632 | - |
1.3112 | 3400 | 0.3507 | - |
1.3884 | 3600 | 0.3431 | - |
1.4655 | 3800 | 0.3345 | - |
1.5426 | 4000 | 0.3282 | - |
1.6197 | 4200 | 0.3232 | - |
1.6969 | 4400 | 0.3148 | - |
1.7740 | 4600 | 0.3053 | - |
1.8511 | 4800 | 0.3003 | - |
1.9283 | 5000 | 0.2981 | - |
2.0054 | 5200 | 0.2913 | - |
2.0825 | 5400 | 0.2787 | - |
2.1597 | 5600 | 0.2764 | - |
2.2368 | 5800 | 0.2742 | - |
2.3139 | 6000 | 0.2693 | - |
2.3911 | 6200 | 0.2698 | - |
2.4682 | 6400 | 0.2635 | - |
2.5453 | 6600 | 0.2586 | - |
2.6224 | 6800 | 0.2577 | - |
2.6996 | 7000 | 0.257 | - |
2.7767 | 7200 | 0.2541 | - |
2.8538 | 7400 | 0.2539 | - |
2.9310 | 7600 | 0.25 | - |
3.0081 | 7800 | 0.2498 | - |
3.0852 | 8000 | 0.2377 | - |
3.1624 | 8200 | 0.2388 | - |
3.2395 | 8400 | 0.2377 | - |
3.3166 | 8600 | 0.2358 | - |
3.3938 | 8800 | 0.2363 | - |
3.4709 | 9000 | 0.2335 | - |
3.5480 | 9200 | 0.2329 | - |
3.6251 | 9400 | 0.2301 | - |
3.7023 | 9600 | 0.2334 | - |
3.7794 | 9800 | 0.2301 | - |
3.8565 | 10000 | 0.2309 | - |
3.9337 | 10200 | 0.2291 | - |
4.0108 | 10400 | 0.2268 | - |
4.0879 | 10600 | 0.2212 | - |
4.1651 | 10800 | 0.2224 | - |
4.2422 | 11000 | 0.2224 | - |
4.3193 | 11200 | 0.2211 | - |
4.3965 | 11400 | 0.2194 | - |
4.4736 | 11600 | 0.2192 | - |
4.5507 | 11800 | 0.2183 | - |
4.6278 | 12000 | 0.222 | - |
4.7050 | 12200 | 0.2199 | - |
4.7821 | 12400 | 0.22 | - |
4.8592 | 12600 | 0.2201 | - |
4.9364 | 12800 | 0.2198 | - |
Framework Versions
- Python: 3.11.0
- Sentence Transformers: 4.0.1
- PyLate: 1.1.7
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
PyLate
@misc{PyLate,
    title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
    author={Chaffin, Antoine and Sourty, Raphaël},
    url={https://github.com/lightonai/pylate},
    year={2024},
}