SentenceTransformer based on BAAI/bge-m3
This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-m3
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
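Because the output embeddings are unit-normalized (via the `Normalize()` module shown below), cosine similarity reduces to a plain dot product. A quick sanity-check sketch of the dimensionality and normalization claims above (the input sentence is an arbitrary placeholder):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("axsu/bge-m3-arabic-test")
embedding = model.encode(["A quick sanity check."])

print(embedding.shape)            # (1, 1024) -> output dimensionality
print(np.linalg.norm(embedding))  # ~1.0 -> embeddings are L2-normalized,
                                  # so cosine similarity == dot product
```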
Model Sources
- Documentation: [Sentence Transformers Documentation](https://sbert.net)
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Hugging Face: [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
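The three modules amount to a CLS-pooled, L2-normalized XLM-RoBERTa encoder. For illustration, here is a minimal sketch of the equivalent forward pass written against the plain `transformers` API; it assumes the checkpoint also loads as a standard `XLMRobertaModel`, which holds for Sentence Transformers repositories:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("axsu/bge-m3-arabic-test")
encoder = AutoModel.from_pretrained("axsu/bge-m3-arabic-test")

batch = tokenizer(
    ["An example sentence."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    hidden = encoder(**batch).last_hidden_state         # (batch, seq_len, 1024)

cls = hidden[:, 0]                                      # (1) CLS-token pooling
embeddings = torch.nn.functional.normalize(cls, dim=1)  # (2) Normalize()
```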
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("axsu/bge-m3-arabic-test")

# Run inference
sentences = [
    'Freedom of the press Media transparency International broadcasting Webster, David.',
    'تزعم الحكومة أن الصحافة حرة.',
    'في عام 2005، انتقل إلى فرنسا حيث بدأ مسيرته الاحترافية، لينضم إلى نادي الدرجة الأولى ليل.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
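The same embeddings also support retrieval-style use cases. A sketch of semantic search over a small corpus with the library's `util.semantic_search` helper (corpus and query here are illustrative only):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("axsu/bge-m3-arabic-test")

corpus = [
    'تزعم الحكومة أن الصحافة حرة.',  # "The government claims the press is free."
    'في عام 2005، انتقل إلى فرنسا حيث بدأ مسيرته الاحترافية.',
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode('Freedom of the press', convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(hits)  # e.g. [[{'corpus_id': 0, 'score': ...}]]
```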
Evaluation
Metrics
Binary Classification
- Dataset: `silma-validation`
- Evaluated with `BinaryClassificationEvaluator`
| Metric                    | Value  |
|:--------------------------|:-------|
| cosine_accuracy           | 1.0    |
| cosine_accuracy_threshold | 0.7209 |
| cosine_f1                 | 1.0    |
| cosine_f1_threshold       | 0.7209 |
| cosine_precision          | 1.0    |
| cosine_recall             | 1.0    |
| cosine_ap                 | 1.0    |
| cosine_mcc                | 1.0    |
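The `cosine_accuracy_threshold` is the decision boundary the evaluator selects: a sentence pair is predicted as a match when its cosine similarity is at or above it. A sketch of that decision rule (`is_similar` is a hypothetical helper; the threshold value is taken from the table above):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("axsu/bge-m3-arabic-test")
THRESHOLD = 0.7209  # cosine_accuracy_threshold from the evaluation above

def is_similar(a: str, b: str) -> bool:
    # Embed both sentences and compare their cosine similarity
    # against the tuned threshold.
    emb = model.encode([a, b])
    score = float(model.similarity(emb[0], emb[1]))
    return score >= THRESHOLD

print(is_similar("The press is free.", "تزعم الحكومة أن الصحافة حرة."))
```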
Training Details
Training Dataset
Unnamed Dataset
- Size: 38 training samples
- Columns: `sentence_0`, `sentence_1`, and `label`
- Approximate statistics based on the first 38 samples:

|         | sentence_0 | sentence_1 | label |
|:--------|:-----------|:-----------|:------|
| type    | string     | string     | float |
| details | min: 18 tokens, mean: 29.37 tokens, max: 47 tokens | min: 10 tokens, mean: 30.21 tokens, max: 57 tokens | min: 0.0, mean: 0.5, max: 1.0 |
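These statistics can be recomputed from the raw pairs using the model's own tokenizer; `token_stats` below is a hypothetical helper, not part of the library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("axsu/bge-m3-arabic-test")

def token_stats(texts):
    # Token counts as produced by the model's tokenizer (includes special tokens)
    lengths = [len(model.tokenizer(text)["input_ids"]) for text in texts]
    return min(lengths), sum(lengths) / len(lengths), max(lengths)

print(token_stats(['تزعم الحكومة أن الصحافة حرة.']))  # (min, mean, max)
```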
- Samples:

| sentence_0 | sentence_1 | label |
|:-----------|:-----------|:------|
| Unorganized militia – composing the Reserve Militia: every able-bodied man of at least 17 and under 45 years of age, not a member of the National Guard or Naval Militia. | باراغواي لديها الخدمة العسكرية الإجبارية، وجميع الذكور البالغ من العمر 18 عاما والذين تتراوح أعمارهم بين 17 عاما في سنة عيد ميلادهم 18 هي مسؤولة عن سنة واحدة من الخدمة الفعلية. | 0.0 |
| It is also grown in Southeast Asia where French colonists introduced it in the late 19th century. | كما يزرع في جنوب شرق آسيا حيث قدم المستعمرين الفرنسيين ذلك في أواخر القرن التاسع عشر. | 1.0 |
| She won the French title in the "Excellence" level and qualified for the "Unlimited" level qualifications in 2015, the highest level of the sport. | فازت باللقب الفرنسي في مستوى "التميز" وأصبحت بعد تلك المسابقة مؤهلة للحصول على مؤهلات مستوى "غير محدود" في عام 2015 ، وهو أعلى مستوى في هذه الرياضة. | 1.0 |
- Loss: `ContrastiveLoss` with these parameters:

```json
{
    "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
    "margin": 0.5,
    "size_average": true
}
```
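`ContrastiveLoss` pulls positive pairs (label 1.0) together and pushes negative pairs (label 0.0) apart until they are at least the margin (0.5, in cosine distance) apart. A minimal training sketch under these settings; the single-row dataset is a placeholder, not the actual training data:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import ContrastiveLoss, SiameseDistanceMetric

model = SentenceTransformer("BAAI/bge-m3")

# Placeholder dataset with the same columns as the one described above
train_dataset = Dataset.from_dict({
    "sentence_0": ["It is also grown in Southeast Asia."],
    "sentence_1": ["كما يزرع في جنوب شرق آسيا."],
    "label": [1.0],
})

loss = ContrastiveLoss(
    model,
    distance_metric=SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```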
Training Hyperparameters
Non-Default Hyperparameters
- `eval_strategy`: steps
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
All Hyperparameters
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
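Only the three non-default values above differ from the stock configuration; everything else keeps its transformers/sentence-transformers default. A hedged sketch of how this configuration could be reconstructed (the `output_dir` path is hypothetical):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="bge-m3-arabic-test",  # hypothetical output path
    eval_strategy="steps",
    fp16=True,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
    # Defaults that determine the step counts in the training logs below:
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
```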
Training Logs
With 38 samples and a per-device batch size of 8, each epoch is 5 optimizer steps, so 3 epochs end at step 15; the epoch/step `-1` rows record evaluations run outside the training loop (before training and after the final step).

| Epoch | Step | silma-validation_cosine_ap |
|:-----:|:----:|:--------------------------:|
| -1    | -1   | 1.0 |
| 0.4   | 2    | 1.0 |
| 0.8   | 4    | 1.0 |
| 1.0   | 5    | 1.0 |
| 1.2   | 6    | 1.0 |
| 1.6   | 8    | 1.0 |
| 2.0   | 10   | 1.0 |
| 2.4   | 12   | 1.0 |
| 2.8   | 14   | 1.0 |
| 3.0   | 15   | 1.0 |
| -1    | -1   | 1.0 |
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.1
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
    author = {Hadsell, R. and Chopra, S. and LeCun, Y.},
    booktitle = {2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
    title = {Dimensionality Reduction by Learning an Invariant Mapping},
    year = {2006},
    volume = {2},
    pages = {1735-1742},
    doi = {10.1109/CVPR.2006.100}
}
```