BERT base (UBC-NLP ARBERTv2) trained on 500k Arabic NLI triplets

This is a sentence-transformers model fine-tuned from UBC-NLP/ARBERTv2 on 500k Arabic NLI triplets. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: UBC-NLP/ARBERTv2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: Arabic (ar)
  • Model Size: 163M parameters (float32)
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
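
The Pooling module above performs attention-masked mean pooling over the transformer's token embeddings. As a minimal sketch of what the two modules compute, assuming the transformer weights also load with plain transformers (the repo id is taken from this card):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("akhooli/sbert_ar_nli_500k_ubc_norm")
model = AutoModel.from_pretrained("akhooli/sbert_ar_nli_500k_ubc_norm")

batch = tokenizer(
    ["لماذا رائحة الغسيل بعد الغسيل"],  # "Why does laundry smell after washing?"
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling: average the token embeddings, ignoring padded positions.
mask = batch["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embeddings.shape)  # torch.Size([1, 768])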

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("akhooli/sbert_ar_nli_500k_ubc_norm")
# Run inference
sentences = [
    # "Why does laundry smell after washing?"
    'لماذا رائحة الغسيل بعد الغسيل',
    # "How to get rid of the odor: if caught right away, washing the clothes again is usually
    # all it takes to remove bad smells. If that does not work and you still have problems,
    # try one of the following methods: wash again, but this time add one cup of vinegar to
    # the load (along with the laundry detergent)."
    'كيفية التخلص من الرائحة: إذا تم اكتشافها على الفور ، فعادة ما يكون غسل الملابس مرة أخرى هو كل ما يلزم لإزالة الروائح الكريهة. إذا لم ينجح ذلك وما زلت تواجه مشاكل ، جرب إحدى الطرق التالية: اغسل مرة أخرى ولكن هذه المرة أضف كوبًا واحدًا من الخل إلى الحمولة (جنبًا إلى جنب مع منظف الغسيل).',
    # "Keep your clothes fresh and dry with an electric dryer from Sears. When laundry day
    # rolls around, you can count on the efficient performance of an electric dryer. Sears
    # has dryers to suit any laundry room's decor. From stainless steel to onyx finishes,
    # this stylish appliance is easy to coordinate with the washer of your choice."
    'حافظ على ملابسك منتعشة وجافة بالمجفف الكهربائي من سيرز. عندما يدور يوم الغسيل ، يمكنك الاعتماد على الأداء الفعال للمجفف الكهربائي. يحتوي سيرز على مجففات تناسب ديكور أي غرفة غسيل. من الفولاذ المقاوم للصدأ إلى تشطيبات الأونيكس ، يسهل تنسيق هذا الجهاز الأنيق مع الغسالة التي تختارها.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
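
Beyond pairwise similarity, the same embeddings can drive semantic search. A brief sketch using sentence_transformers.util.semantic_search, reusing the passages above as a toy corpus:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("akhooli/sbert_ar_nli_500k_ubc_norm")

corpus = [
    # Passage about removing laundry odors (relevant to the query).
    'كيفية التخلص من الرائحة: إذا تم اكتشافها على الفور ، فعادة ما يكون غسل الملابس مرة أخرى هو كل ما يلزم لإزالة الروائح الكريهة.',
    # Passage advertising Sears dryers (less relevant).
    'حافظ على ملابسك منتعشة وجافة بالمجفف الكهربائي من سيرز.',
]
query = 'لماذا رائحة الغسيل بعد الغسيل'  # "Why does laundry smell after washing?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus passages by cosine similarity to the query; the laundry-odor
# passage should outrank the dryer advertisement.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(hit["corpus_id"], round(hit["score"], 3))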

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
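
These non-default values map directly onto SentenceTransformerTrainingArguments. Below is a hedged training sketch consistent with this card: the loss composition (Matryoshka2dLoss wrapping MultipleNegativesRankingLoss) follows the citations at the end of the card, while the dataset id, column layout, and matryoshka_dims are placeholders rather than details from this card, and the eval settings are omitted since they require an eval split.

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import Matryoshka2dLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("UBC-NLP/ARBERTv2")
# Hypothetical dataset id; expects (anchor, positive, negative) triplet columns.
train_dataset = load_dataset("path/to/arabic-nli-triplets", split="train")

# Loss composition matching the citations below: in-batch negatives ranking,
# wrapped so that truncated (Matryoshka) embedding dims are also trained.
base_loss = MultipleNegativesRankingLoss(model)
loss = Matryoshka2dLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])  # dims assumed

args = SentenceTransformerTrainingArguments(
    output_dir="sbert_ar_nli_500k",
    num_train_epochs=1,
    per_device_train_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    # no_duplicates avoids repeating a text within a batch, which would
    # create false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()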

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step    Training Loss   Validation Loss
0.016   250     2.4152          -
0.032   500     1.24            -
0.048   750     1.0238          -
0.064   1000    0.929           -
0.08    1250    0.8268          -
0.096   1500    0.8117          -
0.112   1750    0.7486          -
0.128   2000    0.7053          -
0.144   2250    0.7131          -
0.16    2500    0.7003          -
0.176   2750    0.6735          -
0.192   3000    0.6548          -
0.208   3250    0.63            -
0.224   3500    0.6037          -
0.24    3750    0.6149          -
0.256   4000    0.5545          -
0.272   4250    0.5385          -
0.288   4500    0.5413          -
0.304   4750    0.5217          -
0.32    5000    0.4884          0.4664
0.336   5250    0.5052          -
0.352   5500    0.5239          -
0.368   5750    0.5145          -
0.384   6000    0.4707          -
0.4     6250    0.4514          -
0.416   6500    0.42            -
0.432   6750    0.4747          -
0.448   7000    0.4798          -
0.464   7250    0.4443          -
0.48    7500    0.4402          -
0.496   7750    0.411           -
0.512   8000    0.4546          -
0.528   8250    0.4428          -
0.544   8500    0.4293          -
0.56    8750    0.4052          -
0.576   9000    0.3993          -
0.592   9250    0.3971          -
0.608   9500    0.4246          -
0.624   9750    0.3995          -
0.64    10000   0.4087          0.3428
0.656   10250   0.3955          -
0.672   10500   0.3878          -
0.688   10750   0.3896          -
0.704   11000   0.3535          -
0.72    11250   0.3809          -
0.736   11500   0.3502          -
0.752   11750   0.3558          -
0.768   12000   0.3626          -
0.784   12250   0.3607          -
0.8     12500   0.3775          -
0.816   12750   0.3458          -
0.832   13000   0.3498          -
0.848   13250   0.3618          -
0.864   13500   0.3617          -
0.88    13750   0.3529          -
0.896   14000   0.3285          -
0.912   14250   0.3379          -
0.928   14500   0.336           -
0.944   14750   0.3402          -
0.96    15000   0.3391          0.2951
0.976   15250   0.3663          -
0.992   15500   0.3461          -

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.2.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.0
  • Accelerate: 0.34.2
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1
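
To reproduce this environment, the versions above can be pinned at install time (a sketch; nearby versions will generally also work, but are untested here):

pip install sentence-transformers==3.2.1 transformers==4.44.2 torch==2.4.0 accelerate==0.34.2 datasets==3.0.0 tokenizers==0.19.1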

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

Matryoshka2dLoss

@misc{li20242d,
    title={2D Matryoshka Sentence Embeddings},
    author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
    year={2024},
    eprint={2402.14776},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}