# SentenceTransformer based on Alibaba-NLP/gte-modernbert-base
This is a sentence-transformers model finetuned from Alibaba-NLP/gte-modernbert-base. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: Alibaba-NLP/gte-modernbert-base
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
### Model Sources

- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
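The Pooling module above is configured for CLS-token pooling (`pooling_mode_cls_token: True`). As a minimal illustrative sketch (not part of the original card) of what that amounts to when bypassing Sentence Transformers, the snippet below uses 🤗 Transformers directly; it assumes the Hub repository exposes the underlying ModernBERT weights at its root, as Sentence Transformers repositories typically do:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the underlying ModernBERT encoder and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("anasse15/MNLP_M3_document_encoder")
encoder = AutoModel.from_pretrained("anasse15/MNLP_M3_document_encoder")

inputs = tokenizer(
    ["An example sentence."],
    padding=True, truncation=True, max_length=8192, return_tensors="pt",
)
with torch.no_grad():
    outputs = encoder(**inputs)

# CLS pooling: the sentence embedding is the hidden state of the first token.
embedding = outputs.last_hidden_state[:, 0]  # shape: [1, 768]
```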
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("anasse15/MNLP_M3_document_encoder")

# Run inference
sentences = [
    "What is the primary role of davemaoite in Earth's lower mantle?\nA. It is the most abundant mineral in the crust.\nB. It acts as a catalyst for mineral formation.\nC. It serves as a primary source of diamonds.\nD. It contributes to heat flow through radioactive decay.",
    "Davemaoite is a high-pressure calcium silicate perovskite (CaSiO3) mineral with a distinctive cubic crystal structure. It is named after geophysicist Ho-kwang (Dave) Mao, who pioneered in many discoveries in high-pressure geochemistry and geophysics. \n\nIt is one of three main minerals in Earth’s lower mantle, making up around 5–7% of the material there. Significantly, davemaoite can host uranium and thorium, radioactive isotopes which produce heat through radioactive decay and contribute greatly to heating within this region giving the material a major role in how heat flows deep below the earth's surface.\n\nDavemaoite has been artificially synthesized in the laboratory, but was thought to be too extreme to exist in the Earth's crust. Then in 2021, the mineral was discovered as specks within a diamond that formed between 660 and 900 km beneath the Earth's surface, within the mantle. The diamond had been extracted from the Orapa diamond mine in Botswana. The discovery was made by focusing a high-energy beam of X-rays on precise spots within the diamond using a technique known as synchrotron X-ray diffraction. \n\nCalcium silicate is found in other forms, such as wollastonite in the crust and breyite in the middle and lower regions of the mantle. However, this version can exist only at very high pressure of around 200,000 times that found at Earth’s surface.\n\nSee also\n\n Perovskite (structure)\nList of minerals\n\nReferences \n\nPerovskites\nCalcium minerals",
    'In molecular biology, the calcipressin family of proteins negatively regulate calcineurin by direct binding. They are essential for the survival of T helper type 1 cells. Calcipressin 1 is a phosphoprotein that increases its capacity to inhibit calcineurin when phosphorylated at the conserved FLISPP motif; this phosphorylation also controls the half-life of calcipressin 1 by accelerating its degradation.\n\nIn humans, the Calcipressins family of proteins is derived from three genes. Calcipressin 1 is also known as modulatory calcineurin-interacting protein 1 (MCIP1), Adapt78 and Down syndrome critical region 1 (DSCR1). Calcipressin 2 is variously known as MCIP2, ZAKI-4 and DSCR1-like 1. Calcipressin 3 is also called MCIP3 and DSCR1-like 2.\n\nReferences\n\nProtein families',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
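Since the model was trained on (question, relevant document, unrelated document) triplets, a natural application is retrieval. The follow-up sketch below (illustrative, not from the original card; the query string is invented) reuses `model`, `sentences`, and `embeddings` from the snippet above to rank the documents against a query:

```python
# Encode a query and rank the previously encoded documents against it.
query_embedding = model.encode("Which mineral hosts radioactive isotopes in the lower mantle?")
scores = model.similarity(query_embedding, embeddings)  # shape: [1, 3]
best = int(scores.argmax())
print(f"Best match (score {float(scores[0, best]):.3f}): {sentences[best][:80]}...")
```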
## Evaluation

### Metrics

#### Triplet

- Dataset: validation
- Evaluated with TripletEvaluator

| Metric          | Value |
|:----------------|:------|
| cosine_accuracy | 1.0   |
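The validation split itself is not published with this card, but as a hedged sketch, an equivalent evaluation can be run with the built-in TripletEvaluator using placeholder triplets:

```python
from sentence_transformers.evaluation import TripletEvaluator

# Placeholder triplets; the real validation split is not included in the card.
evaluator = TripletEvaluator(
    anchors=["an example question"],      # plays the role of sentence_0
    positives=["a relevant document"],    # sentence_1
    negatives=["an unrelated document"],  # sentence_2
    name="validation",
)
results = evaluator(model)
print(results)  # e.g. {'validation_cosine_accuracy': ...}
```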
## Training Details

### Training Dataset

#### Unnamed Dataset
- Size: 12,689 training samples
- Columns: sentence_0, sentence_1, and sentence_2
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:-----------|:-----------|:-----------|
| type | string | string | string |
| details | min: 30 tokens<br>mean: 84.52 tokens<br>max: 198 tokens | min: 94 tokens<br>mean: 261.34 tokens<br>max: 818 tokens | min: 101 tokens<br>mean: 257.86 tokens<br>max: 752 tokens |
- Samples (the first three rows are shown below):

**Sample 1**

sentence_0:

> What type of model is the TaiWan Ionospheric Model (TWIM)?
> A. A one-dimensional thermal model of the Earth's crust
> B. A two-dimensional statistical model of atmospheric pressure
> C. A four-dimensional quantum model of particle interactions
> D. A three-dimensional numerical and phenomenological model of ionospheric electron density

sentence_1:

> The TaiWan Ionospheric Model (TWIM) developed in 2008 is a three-dimensional numerical and phenomenological model of ionospheric electron density (Ne). The TWIM has been constructed from global distributed ionosonde foF2 and foE data and vertical Ne profiles retrieved from FormoSat3/COSMIC GPS radio occultation measurements. The TWIM consists of vertically fitted α-Chapman-type layers, with distinct F2, F1, E, and D layers, for which the layer parameters such as peak density, peak density height, and scale height are represented by surface spherical harmonics. These results are useful for providing reliable radio propagation predictions and in investigation of near-Earth space and large-scale Ne distribution with diurnal and seasonal variations, along with geographic features such as the equatorial anomaly. This way the continuity of Ne and its derivatives is also maintained for practical schemes for providing reliable radio propagation predictions.
> References
> The information in thi...

sentence_2:

> Chandrasekhar–Kendall functions are the axisymmetric eigenfunctions of the curl operator, derived by Subrahmanyan Chandrasekhar and P.C. Kendall in 1957, in attempting to solve the force-free magnetic fields. The results were independently derived by both, but were agreed to publish the paper together.
> If the force-free magnetic field equation is written as with the assumption of divergence free field (), then the most general solution for axisymmetric case is
> where is a unit vector and the scalar function satisfies the Helmholtz equation, i.e.,
> The same equation also appears in fluid dynamics in Beltrami flows where, vorticity vector is parallel to the velocity vector, i.e., .
> Derivation
> Taking curl of the equation and using this same equation, we get
> .
> In the vector identity , we can set since it is solenoidal, which leads to a vector Helmholtz equation,
> .
> Every solution of above equation is not the solution of original equation, but the converse is true. If is a scal...

**Sample 2**

sentence_0:

> What is the primary function of the protein encoded by the PFN2 gene?
> A. Facilitating lipid metabolism
> B. Regulating actin polymerization
> C. Encoding DNA repair enzymes
> D. Transporting oxygen in blood

sentence_1:

> Profilin-2 is a protein that in humans is encoded by the PFN2 gene.
> The protein encoded by this gene is a ubiquitous actin monomer-binding protein belonging to the profilin family. It is thought to regulate actin polymerization in response to extracellular signals. There are two alternatively spliced transcript variants encoding different isoforms described for this gene.
> Interactions
> PFN2 has been shown to interact with ROCK1, Vasodilator-stimulated phosphoprotein, CCDC113 and FMNL1.
> References
> Further reading
> External links

sentence_2:

> Stearoyl-CoA is a coenzyme involved in the metabolism of fatty acids. Stearoyl-CoA is an 18-carbon long fatty acyl-CoA chain that participates in an unsaturation reaction. The reaction is catalyzed by the enzyme stearoyl-CoA desaturase, which is located in the endoplasmic reticulum. It forms a cis-double bond between the ninth and tenth carbons within the chain to form the product oleoyl-CoA.
> References
> Bibliography
> Metabolism
> Thioesters of coenzyme A

**Sample 3**

sentence_0:

> Which of the following statements is true regarding the properties of certain mathematical spaces and their relevance in functional analysis?
> A. Souslin spaces are always separable and complete metrizable.
> B. All Polish spaces are K-analytic but not all K-analytic spaces are Polish.
> C. The Borel graph theorem applies only to finite-dimensional spaces.
> D. The VEZF1 gene is involved in the continuity of linear maps in functional analysis.

sentence_1:

> Vascular endothelial zinc finger 1 is a protein that in humans is encoded by the VEZF1 gene.
> Function
> Transcriptional regulatory proteins containing tandemly repeated zinc finger domains are thought to be involved in both normal and abnormal cellular proliferation and differentiation. ZNF161 is a C2H2-type zinc finger protein (Koyano-Nakagawa et al., 1994 [PubMed 8035792]). See MIM 603971 for general information on zinc finger proteins.
> References
> Further reading

sentence_2:

> In mathematics, a trivial semigroup (a semigroup with one element) is a semigroup for which the cardinality of the underlying set is one. The number of distinct nonisomorphic semigroups with one element is one. If S = { a } is a semigroup with one element, then the Cayley table of S is
> {

- Loss: main.TripletLossWithLogging with these parameters:

```json
{
    "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
    "triplet_margin": 5
}
```
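main.TripletLossWithLogging is a custom class that is not distributed with this card; judging by its name and parameters, it presumably wraps the built-in TripletLoss with extra logging. A minimal stand-in under that assumption:

```python
from sentence_transformers import losses

# Built-in TripletLoss with the parameters listed above; the custom
# logging behaviour of TripletLossWithLogging is not reproduced here.
loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)
```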
### Training Hyperparameters

#### Non-Default Hyperparameters

- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1
- fp16: True
- multi_dataset_batch_sampler: round_robin
#### All Hyperparameters

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
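Putting the pieces together, the following is a hedged reconstruction of the training run from the non-default hyperparameters above; the dataset contents are placeholders, since the training data is an unnamed dataset not published with the card, and the custom logging loss is replaced by the built-in TripletLoss:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Start from the base model named in the card.
model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")

# Placeholder triplet dataset with the three columns described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["an example question"],
    "sentence_1": ["a relevant document"],
    "sentence_2": ["an unrelated document"],
})

loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    fp16=True,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; eval_strategy="steps" needs one
    loss=loss,
)
trainer.train()
```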
### Training Logs

| Epoch  | Step | Training Loss | validation_cosine_accuracy |
|:-------|:-----|:--------------|:---------------------------|
| 0.1259 | 100  | -             | 1.0                        |
| 0.2519 | 200  | -             | 1.0                        |
| 0.3778 | 300  | -             | 1.0                        |
| 0.5038 | 400  | -             | 1.0                        |
| 0.6297 | 500  | 0.1864        | 1.0                        |
| 0.7557 | 600  | -             | 1.0                        |
| 0.8816 | 700  | -             | 1.0                        |
| 1.0    | 794  | -             | 1.0                        |
### Framework Versions
- Python: 3.12.8
- Sentence Transformers: 4.1.0
- Transformers: 4.52.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.3.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLossWithLogging

```bibtex
@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```