---
tags:
  - ColBERT
  - PyLate
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:1867662
  - loss:Contrastive
base_model: answerdotai/ModernBERT-base
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
  - accuracy
model-index:
  - name: PyLate model based on answerdotai/ModernBERT-base
    results:
      - task:
          type: col-berttriplet
          name: Col BERTTriplet
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: accuracy
            value: 0.45879998803138733
            name: Accuracy
---

PyLate model based on answerdotai/ModernBERT-base

This is a PyLate model finetuned from answerdotai/ModernBERT-base. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
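To make the MaxSim operator concrete: each query token embedding is compared against every document token embedding, the best match per query token is kept, and those maxima are summed into a single relevance score. A minimal sketch (illustrative, not PyLate's implementation), assuming L2-normalized token embeddings:

import torch

def maxsim_score(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
    # query_embeddings: (num_query_tokens, 128); document_embeddings: (num_doc_tokens, 128).
    # Token-level cosine similarities (embeddings assumed L2-normalized).
    similarities = query_embeddings @ document_embeddings.T
    # For each query token, keep its best-matching document token, then sum.
    return similarities.max(dim=1).values.sum()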

Model Details

Model Description

  • Model Type: PyLate model
  • Base model: answerdotai/ModernBERT-base
  • Document Length: 180 tokens
  • Query Length: 32 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: MaxSim

Model Sources

  • Repository: https://github.com/lightonai/pylate
  • Documentation: https://lightonai.github.io/pylate/

Full Model Architecture

ColBERT(
  (0): Transformer({'max_seq_length': 31, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
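The Dense module above is a bias-free linear projection from ModernBERT's 768-dimensional hidden states down to the 128-dimensional token embeddings; a rough PyTorch equivalent (illustrative only; the token count 31 matches the max_seq_length shown above):

import torch

# Bias-free projection from 768-d hidden states to 128-d token embeddings.
projection = torch.nn.Linear(in_features=768, out_features=128, bias=False)

hidden_states = torch.randn(31, 768)          # one hidden state per token
token_embeddings = projection(hidden_states)  # shape: (31, 128)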

Usage

First install the PyLate library:

pip install -U pylate

Retrieval

PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.

Indexing documents

First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:

from pylate import indexes, models, retrieve

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-1-neg-5-epoch-gooaq-1995000",
)

# Step 2: Initialize the Voyager index
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
    override=True,  # This overwrites the existing index if any
)

# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]

documents_embeddings = model.encode(
    documents,
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
    show_progress_bar=True,
)

# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)

Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:

# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
    index_folder="pylate-index",
    index_name="index",
)

Retrieving top-k documents for queries

Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries. To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:

# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)

# Step 2: Encode the queries
queries_embeddings = model.encode(
    ["query for document 3", "query for document 1"],
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
    show_progress_bar=True,
)

# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=10,  # Retrieve the top 10 matches for each query
)
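The retriever returns one ranked list per query. A hedged sketch of inspecting the results, assuming each match is a dict carrying a document id and its MaxSim score (the format PyLate's examples show):

# Assumes each match is a dict with "id" and "score" keys, best match first.
for query_idx, matches in enumerate(scores):
    best = matches[0]
    print(f"query {query_idx}: document {best['id']} (score {best['score']:.2f})")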

Reranking

If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank.rerank function and pass the queries and documents to rerank:

from pylate import rank, models

queries = [
    "query A",
    "query B",
]

documents = [
    ["document A", "document B"],
    ["document 1", "document C", "document B"],
]

documents_ids = [
    [1, 2],
    [1, 3, 2],
]

model = models.ColBERT(
    model_name_or_path="ayushexel/colbert-ModernBERT-base-1-neg-5-epoch-gooaq-1995000",
)

queries_embeddings = model.encode(
    queries,
    is_query=True,
)

documents_embeddings = model.encode(
    documents,
    is_query=False,
)

reranked_documents = rank.rerank(
    documents_ids=documents_ids,
    queries_embeddings=queries_embeddings,
    documents_embeddings=documents_embeddings,
)
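The output mirrors the nesting of the inputs: one reranked list per query. A hedged sketch of reading it back, assuming the entries use the same id/score dict format as the retrieval results:

# Assumes each entry is a dict with "id" and "score" keys, sorted by score.
for query, ranking in zip(queries, reranked_documents):
    ordered_ids = [entry["id"] for entry in ranking]
    print(f"{query!r} -> {ordered_ids}")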

Evaluation

Metrics

ColBERTTriplet

  • Evaluated with pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator
    Metric     Value
    accuracy   0.4588
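Triplet accuracy is the fraction of (question, answer, negative) triples in which the query scores higher against its answer than against its negative, so 0.4588 means the positive wins in roughly 46% of evaluation triples. A minimal sketch of that computation, reusing the hypothetical maxsim_score defined earlier (not the evaluator's actual code):

def triplet_accuracy(triples: list) -> float:
    # triples: list of (query_emb, positive_emb, negative_emb) tensor tuples
    correct = sum(
        1 for query, positive, negative in triples
        if maxsim_score(query, positive) > maxsim_score(query, negative)
    )
    return correct / len(triples)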

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,867,662 training samples
  • Columns: question, answer, and negative
  • Approximate statistics based on the first 1000 samples:
                 question   answer   negative
    type         string     string   string
    min tokens   9          16       15
    mean tokens  13.12      31.73    31.64
    max tokens   22         32       32
  • Samples:
    • question: are mandarins same as clementines?
      answer: Mandarins… When it comes to Clementines vs. Mandarins, the Mandarin is the master orange of the family, and Clementines, tangerines, and satsumas all fall under this umbrella.
      negative: A: CUTIES® are actually two varieties of mandarins: Clementine mandarins, available November through January; and W. Murcott mandarins, available February through April. ... Unlike other mandarins or oranges, they are seedless, super sweet, easy to peel and kid-sized—only a select few achieve CUTIES® ' high standards.
    • question: why are snow leopards no longer endangered?
      answer: The snow leopard is no longer an endangered species, but its population in the wild is still at risk because of poaching and habitat loss, conservationists said this week. ... Conservationists warned that the risks are not over for the snow leopards, whose distinctive appearances make them attractive to poachers.
      negative: The term "big cat" is typically used to refer to any of the five living members of the genus Panthera, namely tiger, lion, jaguar, leopard, and snow leopard. Except the snow leopard, these species are able to roar.
    • question: are waves measured from the front or back?
      answer: In scientific terms and most used by the surfing community around the world, the wave height is measured vertically from the trough to the crest and is known by surfers as face scale. In Hawaii, local surfers use the back of the wave to measure wave height and is called Hawaiian scale or local scale.
      negative: Wavelength is the distance between sound waves while frequency is the number of times in which the sound wave occurs. 2. Wavelength is used to measure the length of sound waves while frequency is used to measure the recurrence of sound waves.
  • Loss: pylate.losses.contrastive.Contrastive
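The Contrastive loss listed above is, at its core, a cross-entropy objective in which each question's score against its answer competes with its scores against negatives. A hedged, InfoNCE-style sketch of that idea (not PyLate's exact implementation):

import torch
import torch.nn.functional as F

def contrastive_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    # pos_scores: (batch,) MaxSim scores of each question against its answer.
    # neg_scores: (batch, num_negatives) scores against negative passages.
    logits = torch.cat([pos_scores.unsqueeze(1), neg_scores], dim=1)
    targets = torch.zeros(len(logits), dtype=torch.long)  # positive sits at index 0
    return F.cross_entropy(logits, targets)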

Evaluation Dataset

Unnamed Dataset

  • Size: 5,000 evaluation samples
  • Columns: question, answer, and negative_1
  • Approximate statistics based on the first 1000 samples:
                 question   answer   negative_1
    type         string     string   string
    min tokens   9          16       15
    mean tokens  13.02      31.66    31.41
    max tokens   25         32       32
  • Samples:
    • question: what is the best shampoo for thin curly hair?
      answer: ['Best For Daily Cleansing: Mizani True Textures Cream Cleansing Conditioner. ... ', 'Best For Coils: Ouidad VitalCurl Clear & Gentle Shampoo. ... ', 'Best For Restoring Shine: Shea Moisture Coconut & Hibiscus Curl & Shine Shampoo. ... ', 'Best For Fine Curls: Renee Furterer Sublime Curl Curl Activating Shampoo.']
      negative_1: Whether you have straight or curly hair, thin or thick, this is another option that you should not miss for the best OGX shampoo. The Australian tea tree oils in this shampoo are effective for repair of oily, damaged, and frizzy hair. ... It also makes a great choice of shampoo for people who have dry scalp.
    • question: how many days after my period do i start ovulating?
      answer: Many women typically ovulate around 12 to 14 days after the first day of their last period, but some have a naturally short cycle. They may ovulate as soon as six days or so after the first day of their last period.
      negative_1: If you have a short cycle, for example, 21 days, and you bleed for 7 days, then you could ovulate right after your period. This is because ovulation generally occurs 12-16 days before your next period begins, and this would estimate you ovulating at days 6-10 of your cycle.
    • question: are the apes in planet of the apes cgi?
      answer: Unlike in the original 1968 film, there are no monkey suits, heavy makeup jobs or wigs. All of the apes audiences see on-screen are motion-capture CGI apes, which lends them a more realistic effect as the CGI is based on the actors' actual movements.
      negative_1: Among the living primates, humans are most closely related to the apes, which include the lesser apes (gibbons) and the great apes (chimpanzees, gorillas and orangutans).
  • Loss: pylate.losses.contrastive.Contrastive

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 180
  • per_device_eval_batch_size: 180
  • learning_rate: 3e-06
  • num_train_epochs: 5
  • warmup_ratio: 0.1
  • seed: 12
  • bf16: True
  • dataloader_num_workers: 12
  • load_best_model_at_end: True
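These are standard Hugging Face TrainingArguments fields; a hedged sketch of how they would be passed to the Sentence Transformers trainer arguments that PyLate builds on (output_dir is a placeholder):

from sentence_transformers.training_args import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",                 # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=180,
    per_device_eval_batch_size=180,
    learning_rate=3e-6,
    num_train_epochs=5,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    dataloader_num_workers=12,
    load_best_model_at_end=True,
)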

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 180
  • per_device_eval_batch_size: 180
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 3e-06
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 12
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: True
  • dataloader_num_workers: 12
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss accuracy
0 0 - 0.4588
0.0004 1 19.095 -
0.0771 200 12.1999 -
0.1543 400 5.3887 -
0.2314 600 1.9588 -
0.3085 800 1.0338 -
0.3857 1000 0.8202 -
0.4628 1200 0.6943 -
0.5399 1400 0.6087 -
0.6170 1600 0.5484 -
0.6942 1800 0.5049 -
0.7713 2000 0.4754 -
0.8484 2200 0.4497 -
0.9256 2400 0.4298 -
1.0027 2600 0.4102 -
1.0798 2800 0.3861 -
1.1570 3000 0.3727 -
1.2341 3200 0.3632 -
1.3112 3400 0.3507 -
1.3884 3600 0.3431 -
1.4655 3800 0.3345 -
1.5426 4000 0.3282 -
1.6197 4200 0.3232 -
1.6969 4400 0.3148 -
1.7740 4600 0.3053 -
1.8511 4800 0.3003 -
1.9283 5000 0.2981 -
2.0054 5200 0.2913 -
2.0825 5400 0.2787 -
2.1597 5600 0.2764 -
2.2368 5800 0.2742 -
2.3139 6000 0.2693 -
2.3911 6200 0.2698 -
2.4682 6400 0.2635 -
2.5453 6600 0.2586 -
2.6224 6800 0.2577 -
2.6996 7000 0.257 -
2.7767 7200 0.2541 -
2.8538 7400 0.2539 -
2.9310 7600 0.25 -
3.0081 7800 0.2498 -
3.0852 8000 0.2377 -
3.1624 8200 0.2388 -
3.2395 8400 0.2377 -
3.3166 8600 0.2358 -
3.3938 8800 0.2363 -
3.4709 9000 0.2335 -
3.5480 9200 0.2329 -
3.6251 9400 0.2301 -
3.7023 9600 0.2334 -
3.7794 9800 0.2301 -
3.8565 10000 0.2309 -
3.9337 10200 0.2291 -
4.0108 10400 0.2268 -
4.0879 10600 0.2212 -
4.1651 10800 0.2224 -
4.2422 11000 0.2224 -
4.3193 11200 0.2211 -
4.3965 11400 0.2194 -
4.4736 11600 0.2192 -
4.5507 11800 0.2183 -
4.6278 12000 0.222 -
4.7050 12200 0.2199 -
4.7821 12400 0.22 -
4.8592 12600 0.2201 -
4.9364 12800 0.2198 -

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 4.0.1
  • PyLate: 1.1.7
  • Transformers: 4.48.2
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084"
}

PyLate

@misc{PyLate,
    title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
    author={Chaffin, Antoine and Sourty, Raphaël},
    url={https://github.com/lightonai/pylate},
    year={2024}
}