SentenceTransformer based on Alibaba-NLP/gte-modernbert-base

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-modernbert-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-modernbert-base
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
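
The Pooling module uses the CLS token (pooling_mode_cls_token: True), so each text's embedding is simply the final hidden state of its first token. Below is a minimal sketch of the equivalent computation with the plain transformers API; it assumes the checkpoint also loads via AutoModel (which holds for ModernBERT-based models), and the input string is just an illustration.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("anasse15/MNLP_M3_document_encoder")
encoder = AutoModel.from_pretrained("anasse15/MNLP_M3_document_encoder")

inputs = tokenizer(
    "Davemaoite is a high-pressure calcium silicate perovskite mineral.",
    return_tensors="pt", truncation=True, max_length=8192,
)
with torch.no_grad():
    last_hidden = encoder(**inputs).last_hidden_state  # [batch, seq_len, 768]
embedding = last_hidden[:, 0]  # CLS token -> [batch, 768]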

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("anasse15/MNLP_M3_document_encoder")
# Run inference
sentences = [
    "What is the primary role of davemaoite in Earth's lower mantle?\nA. It is the most abundant mineral in the crust.\nB. It acts as a catalyst for mineral formation.\nC. It serves as a primary source of diamonds.\nD. It contributes to heat flow through radioactive decay.",
    "Davemaoite  is  a high-pressure calcium silicate perovskite (CaSiO3) mineral with a distinctive cubic crystal structure. It is named after geophysicist Ho-kwang (Dave) Mao, who pioneered in many discoveries in high-pressure geochemistry and geophysics.  \n\nIt is one of three main minerals in Earth’s lower mantle, making up around 5–7% of the material there. Significantly, davemaoite can host uranium and thorium, radioactive isotopes which produce heat through radioactive decay and contribute greatly to heating within this region giving the material a major role in how heat flows deep below the earth's surface.\n\nDavemaoite has been artificially synthesized in the laboratory, but was thought to be too extreme to exist in the Earth's crust. Then in 2021, the mineral was discovered as specks within a diamond that formed between 660 and 900 km beneath the Earth's surface, within the mantle. The diamond had been extracted from the Orapa diamond mine in Botswana. The discovery was made  by focusing a high-energy beam of X-rays on precise spots within the diamond  using a technique known as synchrotron X-ray diffraction. \n\nCalcium silicate is found in other forms, such as wollastonite in the crust and breyite in the middle and lower regions of the mantle. However, this version can exist only at very high pressure of around 200,000 times that found at Earth’s surface.\n\nSee also\n\n Perovskite (structure)\nList of minerals\n\nReferences \n\nPerovskites\nCalcium minerals",
    'In molecular biology, the calcipressin family of proteins negatively regulate calcineurin by direct binding. They are essential for the survival of T helper type 1 cells. Calcipressin 1 is a phosphoprotein that increases its capacity to inhibit calcineurin when phosphorylated at the conserved FLISPP motif; this phosphorylation also controls the half-life of calcipressin 1 by accelerating its degradation.\n\nIn humans, the Calcipressins family of proteins is derived from three genes. Calcipressin 1 is also known as modulatory calcineurin-interacting protein 1 (MCIP1), Adapt78 and Down syndrome critical region 1 (DSCR1). Calcipressin 2 is variously known as MCIP2, ZAKI-4 and DSCR1-like 1. Calcipressin 3 is also called MCIP3 and DSCR1-like 2.\n\nReferences\n\nProtein families',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
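
For retrieval-style use, the same embeddings can rank passages against a query. Continuing the snippet above (the query string is made up for the example):

# Rank the three passages above against a new query by cosine similarity
query_embedding = model.encode(["Which protein family inhibits calcineurin?"])
scores = model.similarity(query_embedding, embeddings)  # shape [1, 3]
best = scores.argmax().item()
print(f"Best match: sentence {best} (score {scores[0, best]:.3f})")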

Evaluation

Metrics

Triplet

Metric            Value
cosine_accuracy   1.0
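
cosine_accuracy is the fraction of evaluation triplets for which the anchor is more similar (by cosine similarity) to its positive than to its negative. A minimal sketch of that computation; the triplet lists are placeholders, not the actual evaluation data:

from sentence_transformers import util

def cosine_accuracy(model, anchors, positives, negatives):
    a, p, n = (model.encode(x) for x in (anchors, positives, negatives))
    pos_sim = util.cos_sim(a, p).diagonal()  # each anchor vs. its own positive
    neg_sim = util.cos_sim(a, n).diagonal()  # each anchor vs. its own negative
    return (pos_sim > neg_sim).float().mean().item()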

Training Details

Training Dataset

Unnamed Dataset

  • Size: 12,689 training samples
  • Columns: sentence_0, sentence_1, and sentence_2
  • Approximate statistics based on the first 1000 samples:

                  sentence_0   sentence_1   sentence_2
    type          string       string       string
    min tokens    30           94           101
    mean tokens   84.52        261.34       257.86
    max tokens    198          818          752
  • Samples:
    Example 1
    sentence_0:
      What type of model is the TaiWan Ionospheric Model (TWIM)?
      A. A one-dimensional thermal model of the Earth's crust
      B. A two-dimensional statistical model of atmospheric pressure
      C. A four-dimensional quantum model of particle interactions
      D. A three-dimensional numerical and phenomenological model of ionospheric electron density
    sentence_1:
      The TaiWan Ionospheric Model (TWIM), developed in 2008, is a three-dimensional numerical and phenomenological model of ionospheric electron density (Ne). The TWIM has been constructed from globally distributed ionosonde foF2 and foE data and vertical Ne profiles retrieved from FormoSat3/COSMIC GPS radio occultation measurements. The TWIM consists of vertically fitted α-Chapman-type layers, with distinct F2, F1, E, and D layers, for which the layer parameters such as peak density, peak density height, and scale height are represented by surface spherical harmonics. These results are useful for providing reliable radio propagation predictions and for investigating near-Earth space and large-scale Ne distribution with diurnal and seasonal variations, along with geographic features such as the equatorial anomaly. In this way, the continuity of Ne and its derivatives is also maintained, which supports practical schemes for reliable radio propagation prediction. References: The information in thi...
    sentence_2:
      Chandrasekhar–Kendall functions are the axisymmetric eigenfunctions of the curl operator, derived by Subrahmanyan Chandrasekhar and P. C. Kendall in 1957 while attempting to solve for force-free magnetic fields. The results were derived independently by both authors, who agreed to publish the paper together. If the force-free magnetic field equation is written as ∇ × H = αH, with the assumption of a divergence-free field (∇ · H = 0), then the most general solution for the axisymmetric case is H = (1/α) ∇ × ∇ × (ψ n̂) + ∇ × (ψ n̂), where n̂ is a unit vector and the scalar function ψ satisfies the Helmholtz equation, ∇²ψ + α²ψ = 0. The same equation also appears in fluid dynamics in Beltrami flows, where the vorticity vector is parallel to the velocity vector, i.e., ω = αv. Derivation: taking the curl of the equation and using the same equation gives ∇ × (∇ × H) = α²H. With the vector identity ∇ × (∇ × H) = ∇(∇ · H) − ∇²H, we can set ∇ · H = 0 since the field is solenoidal, which leads to a vector Helmholtz equation, ∇²H + α²H = 0. Not every solution of this equation solves the original equation, but the converse is true. If ψ is a scal...

    Example 2
    sentence_0:
      What is the primary function of the protein encoded by the PFN2 gene?
      A. Facilitating lipid metabolism
      B. Regulating actin polymerization
      C. Encoding DNA repair enzymes
      D. Transporting oxygen in blood
    sentence_1:
      Profilin-2 is a protein that in humans is encoded by the PFN2 gene. The protein encoded by this gene is a ubiquitous actin monomer-binding protein belonging to the profilin family. It is thought to regulate actin polymerization in response to extracellular signals. There are two alternatively spliced transcript variants encoding different isoforms described for this gene. Interactions: PFN2 has been shown to interact with ROCK1, vasodilator-stimulated phosphoprotein, CCDC113 and FMNL1.
    sentence_2:
      Stearoyl-CoA is a coenzyme involved in the metabolism of fatty acids. Stearoyl-CoA is an 18-carbon-long fatty acyl-CoA chain that participates in an unsaturation reaction. The reaction is catalyzed by the enzyme stearoyl-CoA desaturase, which is located in the endoplasmic reticulum. It forms a cis double bond between the ninth and tenth carbons within the chain to form the product oleoyl-CoA.

    Example 3
    sentence_0:
      Which of the following statements is true regarding the properties of certain mathematical spaces and their relevance in functional analysis?
      A. Souslin spaces are always separable and completely metrizable.
      B. All Polish spaces are K-analytic, but not all K-analytic spaces are Polish.
      C. The Borel graph theorem applies only to finite-dimensional spaces.
      D. The VEZF1 gene is involved in the continuity of linear maps in functional analysis.
    sentence_1:
      Vascular endothelial zinc finger 1 is a protein that in humans is encoded by the VEZF1 gene. Function: transcriptional regulatory proteins containing tandemly repeated zinc finger domains are thought to be involved in both normal and abnormal cellular proliferation and differentiation. ZNF161 is a C2H2-type zinc finger protein (Koyano-Nakagawa et al., 1994 [PubMed 8035792]). See MIM 603971 for general information on zinc finger proteins.
    sentence_2:
      In mathematics, a trivial semigroup (a semigroup with one element) is a semigroup for which the cardinality of the underlying set is one. The number of distinct nonisomorphic semigroups with one element is one. If S = { a } is a semigroup with one element, then the Cayley table of S is {
  • Loss: main.TripletLossWithLogging with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
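
    main.TripletLossWithLogging is a project-local wrapper (presumably around sentence_transformers.losses.TripletLoss, with logging added); the wrapper itself is not included in the card. With the parameters above, the underlying objective is the standard Euclidean triplet loss with margin 5, sketched here:

    import torch.nn.functional as F

    def triplet_loss(anchor, positive, negative, margin=5.0):
        # anchor, positive, negative: [batch, 768] embedding tensors
        d_pos = F.pairwise_distance(anchor, positive)  # Euclidean distance to positive
        d_neg = F.pairwise_distance(anchor, negative)  # Euclidean distance to negative
        return F.relu(d_pos - d_neg + margin).mean()   # hinge: push d_neg >= d_pos + margin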
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • fp16: True
  • multi_dataset_batch_sampler: round_robin
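
A minimal sketch of a training run consistent with these settings. The dataset construction is a placeholder (the real dataset has 12,689 anchor/positive/negative rows), the output directory name is an assumption, and the card's TripletLossWithLogging wrapper is replaced here by the stock TripletLoss:

from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")

# Placeholder triplets matching the card's column names
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is davemaoite?"],
    "sentence_1": ["Davemaoite is a high-pressure calcium silicate perovskite mineral."],
    "sentence_2": ["Profilin-2 is a protein encoded by the PFN2 gene."],
})

loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)
args = SentenceTransformerTrainingArguments(
    output_dir="MNLP_M3_document_encoder",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    fp16=True,
    # eval_strategy="steps", as in the card, additionally requires an eval
    # dataset or evaluator, which is omitted from this sketch.
)
trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset, loss=loss)
trainer.train()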

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch    Step   Training Loss   validation_cosine_accuracy
0.1259   100    -               1.0
0.2519   200    -               1.0
0.3778   300    -               1.0
0.5038   400    -               1.0
0.6297   500    0.1864          1.0
0.7557   600    -               1.0
0.8816   700    -               1.0
1.0      794    -               1.0

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 4.1.0
  • Transformers: 4.52.3
  • PyTorch: 2.7.0+cu126
  • Accelerate: 1.3.0
  • Datasets: 3.6.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLossWithLogging

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}