---
base_model: aubmindlab/bert-base-arabertv02
datasets:
- akhooli/arabic-triplets-1m-curated-sims-len
language:
- ar
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:75000
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
---
Arabic-SBERT-100K
This is a sentence-transformers model finetuned from aubmindlab/bert-base-arabertv02. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. The model was trained on 100K samples filtered from the akhooli/arabic-triplets-1m-curated-sims-len dataset (75K training, 25K validation) for 5 epochs, reaching a final training loss of 0.133 (using MatryoshkaLoss).
The rest of this file is auto-generated.
========================================================================
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: aubmindlab/bert-base-arabertv02
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'ما هو نوع الدهون الموجودة في الأفوكادو؟',
    'حوالي 15 في المائة من الدهون في الأفوكادو مشبعة ، مع كل كوب واحد من الأفوكادو المفروم يحتوي على 3.2 جرام من الدهون المشبعة ، وهو ما يمثل 16 في المائة من DV البالغ 20 جرامًا. تحتوي الأفوكادو في الغالب على دهون أحادية غير مشبعة ، مع 67 في المائة من إجمالي الدهون ، أو 14.7 جرامًا لكل كوب مفروم ، ويتكون من هذا النوع من الدهون.',
    'يمكن أن يؤدي ارتفاع مستوى الدهون الثلاثية ، وهي نوع من الدهون (الدهون) في الدم ، إلى زيادة خطر الإصابة بأمراض القلب ، ويمكن أن يؤدي توفير مستوى مرتفع من الدهون الثلاثية ، وهي نوع من الدهون (الدهون) في الدم ، إلى زيادة خطر الإصابة بأمراض القلب. مرض.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
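If you prefer not to depend on the sentence-transformers library, the same embeddings can be reproduced with plain transformers by applying the mean pooling shown in the architecture above. The following is a minimal sketch, assuming the same placeholder model id as the snippet above; mean_pool is an illustrative helper, not part of either library.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence_transformers_model_id")
model = AutoModel.from_pretrained("sentence_transformers_model_id")

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(
    ["ما هو نوع الدهون الموجودة في الأفوكادو؟"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    out = model(**batch)
embeddings = mean_pool(out.last_hidden_state, batch["attention_mask"])
print(embeddings.shape)
# torch.Size([1, 768])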
Training Details
Training Dataset
Unnamed Dataset
- Size: 75,000 training samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:
 | anchor | positive | negative |
---|---|---|---|
type | string | string | string |
min | 4 tokens | 4 tokens | 4 tokens |
mean | 12.88 tokens | 13.74 tokens | 13.38 tokens |
max | 58 tokens | 126 tokens | 146 tokens |
- Samples:
anchor | positive | negative |
---|---|---|
هل تشاجر (سي إس لويس) و (جي آر آر تولكين) ، إن كان الأمر كذلك ، فما هو السبب؟ | هل صحيح أن (سي إس لويس) و (تولكين) تشاجرا؟ | ما هي أفضل الكتب للدراسة في الجامعة؟ |
ما هي اعراض فقر الدم؟ | ما هي اعراض الانيميا؟ | كيف احضر كيكة العسل؟ |
من ستصوت له؟ دونالد ترامب أو هيلاري كلينتون؟ | هل تؤيدون دونالد ترامب أو هيلاري كلينتون؟ لماذا؟ | كيف أتغلب على إدمان المواد الإباحية؟ |
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1 }
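Because the model was trained with MatryoshkaLoss over the dimensions [768, 512, 256, 128, 64], embeddings can be truncated to any of those sizes with only a modest quality drop. A minimal sketch, assuming the same placeholder model id as above and the truncate_dim option of sentence-transformers 3.x:
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to 256 dimensions, one of the trained Matryoshka sizes
model = SentenceTransformer("sentence_transformers_model_id", truncate_dim=256)
embeddings = model.encode(["كيف أسيطر على غضبي؟"])
print(embeddings.shape)
# (1, 256)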
Evaluation Dataset
Unnamed Dataset
- Size: 25,000 evaluation samples
- Columns: anchor, positive, and negative
- Approximate statistics based on the first 1000 samples:
 | anchor | positive | negative |
---|---|---|---|
type | string | string | string |
min | 4 tokens | 4 tokens | 4 tokens |
mean | 12.6 tokens | 14.82 tokens | 13.78 tokens |
max | 70 tokens | 239 tokens | 128 tokens |
- Samples:
anchor | positive | negative |
---|---|---|
نعم , نعم , أو رأيت "تشينما بارا ديسو" | نعم ، أو "تشينما بارا ديسو" كانت تلك التي شاهدتها | أنا لم أرى "تشينما بارا ديسو". |
رجل وامرأة يجلسان على الشاطئ بينما تغرب الشمس | هناك رجل وامرأة يجلسان على الشاطئ | إنهم يشاهدون شروق الشمس |
كيف أسيطر على غضبي؟ | ما هي أفضل طريقة للسيطرة على الغضب؟ | كيف أعرف إن كانت زوجتي تخونني؟ |
- Loss: MatryoshkaLoss with these parameters:
  { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1 }
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 5
- warmup_ratio: 0.1
- fp16: True
- batch_sampler: no_duplicates
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 5
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
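For reference, the following is a minimal sketch of how these hyperparameters map onto the sentence-transformers v3 training API. The exact 100K filtering and 75K/25K split used for this model are not documented here, so the shuffle/split and the column selection below are illustrative assumptions, and the output directory name is hypothetical.
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("aubmindlab/bert-base-arabertv02")

# Assumption: take 100K rows and split 75K/25K; the actual filtering differed.
dataset = load_dataset("akhooli/arabic-triplets-1m-curated-sims-len", split="train")
dataset = dataset.shuffle(seed=42).select(range(100_000))
# Assumption: keep only the triplet columns shown in the tables above.
dataset = dataset.select_columns(["anchor", "positive", "negative"])
split = dataset.train_test_split(test_size=25_000, seed=42)

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, as configured above
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="arabic-sbert-100k",  # hypothetical output path
    num_train_epochs=5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=split["train"],
    eval_dataset=split["test"],
    loss=loss,
)
trainer.train()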
Training Logs
Epoch | Step | Training Loss | Validation Loss |
---|---|---|---|
0.2133 | 500 | 1.4163 | 0.3134 |
0.4266 | 1000 | 0.3306 | 0.1912 |
0.6399 | 1500 | 0.2263 | 0.1527 |
0.8532 | 2000 | 0.1818 | 0.1297 |
1.0666 | 2500 | 0.1658 | 0.1167 |
1.2799 | 3000 | 0.1139 | 0.1040 |
1.4932 | 3500 | 0.0808 | 0.1018 |
1.7065 | 4000 | 0.0692 | 0.0959 |
1.9198 | 4500 | 0.058 | 0.0958 |
2.1331 | 5000 | 0.0653 | 0.0882 |
2.3464 | 5500 | 0.0503 | 0.0912 |
2.5597 | 6000 | 0.0338 | 0.0970 |
2.7730 | 6500 | 0.0363 | 0.0906 |
2.9863 | 7000 | 0.0375 | 0.0856 |
3.1997 | 7500 | 0.0401 | 0.0879 |
3.4130 | 8000 | 0.031 | 0.0848 |
3.6263 | 8500 | 0.0255 | 0.0938 |
3.8396 | 9000 | 0.0239 | 0.0858 |
4.0529 | 9500 | 0.0305 | 0.0840 |
4.2662 | 10000 | 0.0281 | 0.0833 |
4.4795 | 10500 | 0.0174 | 0.0840 |
4.6928 | 11000 | 0.0216 | 0.0882 |
4.9061 | 11500 | 0.022 | 0.0866 |
Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.42.3
- PyTorch: 2.1.2
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}