SentenceTransformer based on sentence-transformers/all-distilroberta-v1

This is a sentence-transformers model finetuned from sentence-transformers/all-distilroberta-v1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-distilroberta-v1
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
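In plain terms, the three modules do the following: the RoBERTa encoder produces per-token embeddings, the Pooling module mean-pools them under the attention mask, and Normalize L2-normalizes the result. Below is a minimal sketch of the same pipeline using plain transformers, assuming the repository's tokenizer and transformer weights load through the Auto classes (the usual pattern for this architecture):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Bo8dady/finetuned2-College-embeddings")
encoder = AutoModel.from_pretrained("Bo8dady/finetuned2-College-embeddings")

encoded = tokenizer(["example sentence"], padding=True, truncation=True,
                    max_length=512, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**encoded).last_hidden_state      # (0) Transformer

# Mean-pool over real tokens only, then L2-normalize
mask = encoded["attention_mask"].unsqueeze(-1).float()
pooled = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)  # (1) Pooling
embeddings = F.normalize(pooled, p=2, dim=1)                     # (2) Normalize
print(embeddings.shape)  # torch.Size([1, 768])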

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Bo8dady/finetuned2-College-embeddings")
# Run inference
sentences = [
    "Where can I find Abdel Badi Salem's email address?",
    'Dr. Abdel Badi Salem is part of the CS department and can be reached at [email protected].',
    "# **Abstract**\n\n## **Sports Analytics Overview**\nSports analytics has been successfully applied in sports like football and basketball. However, its application in soccer has been limited. Research in soccer analytics with Machine Learning techniques is limited and is mostly employed only for predictions. There is a need to find out if the application of Machine Learning can bring better and more insightful results in soccer analytics. In this thesis, we perform descriptive as well as predictive analysis of soccer matches and player performances.\n\n## **Football Rating Analysis**\nIn football, it is popular to rely on ratings by experts to assess a player's performance. However, the experts do not unravel the criteria they use for their rating. We attempt to identify the most important attributes of player's performance which determine the expert ratings. In this way we find the latent knowledge which the experts use to assign ratings to players. We performed a series of classifications with three different pruning strategies and an array of Machine Learning algorithms. The best results for predicting ratings using performance metrics had mean absolute error of 0.17. We obtained a list of most important performance metrics for each of the playing positions which approximates the attributes considered by the experts for assigning ratings. Then we find the most influential performance metrics of the players for determining the match outcome and we examine the extent to which the outcome is characterized by the performance attributes of the players. We found 34 performance attributes",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
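Since semantic search is one of the intended uses, a small retrieval loop is a natural next step. Here is a minimal sketch using util.semantic_search; the corpus below is a hypothetical stand-in for your own document chunks:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("Bo8dady/finetuned2-College-embeddings")

# Hypothetical corpus of college FAQ/document chunks
corpus = [
    "Dr. Abdel Badi Salem is part of the CS department and can be reached at [email protected].",
    "The Faculty of Computers and Information requires at least four years of study for a bachelor's degree.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query_embedding = model.encode("How can I contact Abdel Badi Salem?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")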

Evaluation

Metrics

Information Retrieval

| Metric              |  Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.1881 |
| cosine_accuracy@3   | 0.4186 |
| cosine_accuracy@5   | 0.5677 |
| cosine_accuracy@10  | 0.8463 |
| cosine_precision@1  | 0.1881 |
| cosine_precision@3  | 0.1395 |
| cosine_precision@5  | 0.1135 |
| cosine_precision@10 | 0.0846 |
| cosine_recall@1     | 0.1881 |
| cosine_recall@3     | 0.4186 |
| cosine_recall@5     | 0.5677 |
| cosine_recall@10    | 0.8463 |
| cosine_ndcg@10      | 0.4726 |
| cosine_mrr@10       | 0.3588 |
| cosine_map@100      | 0.3678 |

Information Retrieval

| Metric              |  Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.1884 |
| cosine_accuracy@3   | 0.4173 |
| cosine_accuracy@5   | 0.567  |
| cosine_accuracy@10  | 0.8456 |
| cosine_precision@1  | 0.1884 |
| cosine_precision@3  | 0.1391 |
| cosine_precision@5  | 0.1134 |
| cosine_precision@10 | 0.0846 |
| cosine_recall@1     | 0.1884 |
| cosine_recall@3     | 0.4173 |
| cosine_recall@5     | 0.567  |
| cosine_recall@10    | 0.8456 |
| cosine_ndcg@10      | 0.4722 |
| cosine_mrr@10       | 0.3586 |
| cosine_map@100      | 0.3677 |

Information Retrieval

| Metric              |  Value |
|:--------------------|-------:|
| cosine_accuracy@1   | 0.1019 |
| cosine_accuracy@3   | 0.3183 |
| cosine_accuracy@5   | 0.5359 |
| cosine_accuracy@10  | 0.8727 |
| cosine_precision@1  | 0.1019 |
| cosine_precision@3  | 0.1061 |
| cosine_precision@5  | 0.1072 |
| cosine_precision@10 | 0.0873 |
| cosine_recall@1     | 0.1019 |
| cosine_recall@3     | 0.3183 |
| cosine_recall@5     | 0.5359 |
| cosine_recall@10    | 0.8727 |
| cosine_ndcg@10      | 0.4252 |
| cosine_mrr@10       | 0.2893 |
| cosine_map@100      | 0.2965 |
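The metric names above follow the library's InformationRetrievalEvaluator. Below is a minimal sketch of how such numbers can be reproduced on your own data, assuming Sentence Transformers 3.x, where the evaluator returns a metrics dict; the queries, corpus, and relevant_docs dicts here are hypothetical toy placeholders:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Bo8dady/finetuned2-College-embeddings")

# Hypothetical toy data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "Where can I find Abdel Badi Salem's email address?"}
corpus = {
    "d1": "Dr. Abdel Badi Salem is part of the CS department and can be reached at [email protected].",
    "d2": "The final exam for Distributed Computing from 2018 is available online.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs,
                                          name="ai-college-validation")
metrics = evaluator(model)
print(metrics["ai-college-validation_cosine_ndcg@10"])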

Training Details

Training Dataset

Unnamed Dataset

  • Size: 4,030 training samples
  • Columns: Question and chunk
  • Approximate statistics based on the first 1000 samples:

    |         | Question                                          | chunk                                                |
    |:--------|:--------------------------------------------------|:-----------------------------------------------------|
    | type    | string                                            | string                                               |
    | details | min: 8 tokens, mean: 15.99 tokens, max: 31 tokens | min: 21 tokens, mean: 133.41 tokens, max: 512 tokens |
  • Samples:

    Question: Could you share the link to the 2018 Distributed Computing final exam?
    chunk: The final exam for Distributed Computing course, offered by the computer science department, from 2018, is available at the following link: [https://drive.google.com/file/d/1YSzMeYStlFEztP0TloIcBqnfPr60o4ez/view?usp=sharing].

    Question: What databases exist for footstep recognition research?
    chunk: Abstract

    Documentation Overview
    This documentation reports an experimental analysis of footsteps as a biometric. The focus here is on information extracted from the time domain of signals collected from an array of piezoelectric sensors.

    Database Information
    Results are related to the largest footstep database collected to date, with almost 20,000 valid footstep signals and more than 120 persons, which is well beyond previous related databases.

    Feature Extraction
    Three feature approaches have been extracted, the popular ground reaction force (GRF), the spatial average and the upper and lower contours of the pressure signals.

    Experimental Results
    Experimental work is based on a verification mode with a holistic approach based on PCA and SVM, achieving results in the range of 5 to 15% equal error rate (EER) depending on the experimental conditions of quantity of data used in the reference models.

    Question: Is there a maximum duration of study specified in the text?
    chunk: Topic: Duration of Study
    Summary: A bachelor's degree at the Faculty of Computers and Information requires at least four years of study, contingent on fulfilling degree requirements.
    Chunk: "Duration of study
    • The duration of study at the Faculty of Computers and Information to obtain a bachelor's degree is not less than 4 years, provided that the requirements for obtaining the scientific degree are completed."
  • Loss: MultipleNegativesRankingLoss (see the construction sketch after this list) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
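
MultipleNegativesRankingLoss treats each (Question, chunk) pair as a positive and the other chunks in the same batch as in-batch negatives, which is also why the no_duplicates batch sampler listed under the hyperparameters matters. A minimal construction sketch with the parameters above; the two-column dataset is a hypothetical stand-in for the real 4,030-sample dataset:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-distilroberta-v1")

# Hypothetical two-column dataset matching the Question/chunk schema above
train_dataset = Dataset.from_dict({
    "Question": ["Could you share the link to the 2018 Distributed Computing final exam?"],
    "chunk": ["The final exam for Distributed Computing course, offered by the computer science department, from 2018, is available at the following link."],
})

# scale=20.0 and cos_sim match the parameters listed above (also the library defaults)
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)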
    

Evaluation Dataset

Unnamed Dataset

  • Size: 575 evaluation samples
  • Columns: Question and chunk
  • Approximate statistics based on the first 575 samples:

    |         | Question                                          | chunk                                                |
    |:--------|:--------------------------------------------------|:-----------------------------------------------------|
    | type    | string                                            | string                                               |
    | details | min: 9 tokens, mean: 15.97 tokens, max: 29 tokens | min: 21 tokens, mean: 134.83 tokens, max: 484 tokens |
  • Samples:

    Question: Are there projects that use machine learning for automatic brain tumor identification?
    chunk: # Abstract

    ## Brain and Tumor Description
    A human brain is center of the nervous system; it is a collection of white mass of cells. A tumor of brain is collection of uncontrolled increasing of these cells abnormally found in different part of the brain namely Glial cells, neurons, lymphatic tissues, blood vessels, pituitary glands and other part of brain which lead to the cancer.

    ## Detection and Identification
    Manually it is not so easily possible to detect and identify the tumor. Programming division method by MRI is way to detect and identify the tumor. In order to give precise output a strong segmentation method is needed. Brain tumor identification is really challenging task in early stages of life. But now it became advanced with various machine learning and deep learning algorithms. Now a day's issue of brain tumor automatic identification is of great interest. In Order to detect the brain tumor of a patient we consider the data of patients like MRI images of a pat...

    Question: Are there studies that propose solutions to the challenges of plant pest detection using deep learning?
    chunk: Abstract

    Introduction
    Identification of the plant diseases is the key to preventing the losses in the yield and quantity of the agricultural product. Disease diagnosis based on the detection of early symptoms is a usual threshold taken into account for integrated pest management strategies. Through deep learning methodologies, plant diseases can be detected and diagnosed.

    Study Discussion
    On this basis, this study discusses possible challenges in practical applications of plant diseases and pests detection based on deep learning. In addition, possible solutions and research ideas are proposed for the challenges, and several suggestions are given. Finally, this study gives the analysis and prospect of the future trend of plant diseases and pests detection based on deep learning.

    Question: Is there a link available for the 2025 Calc 1 course exam?
    chunk: The final exam for the calculus1 course, offered by the general department, from 2025, is available at the following link: [https://drive.google.com/file/d/1g8iiGUo4HCUzNNWBJJrW1QZAsz-RYehw/view?usp=sharing].
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 1e-06
  • warmup_ratio: 0.2
  • batch_sampler: no_duplicates
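
The non-default values above map one-to-one onto SentenceTransformerTrainingArguments. A minimal end-to-end sketch, assuming the hypothetical model, datasets, and loss from the Training Dataset section above and a hypothetical output directory:

from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.training_args import BatchSamplers

# model, train_dataset, eval_dataset, and loss as sketched in the sections above
args = SentenceTransformerTrainingArguments(
    output_dir="finetuned2-College-embeddings",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-6,
    warmup_ratio=0.2,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()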

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-06
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.2
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch  | Step | Training Loss | Validation Loss | ai-college-validation_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:------------------------------------:|
| -1     | -1   | -             | -               | 0.4208                               |
| 0.3968 | 100  | 0.1371        | 0.0785          | 0.4483                               |
| 0.7937 | 200  | 0.0575        | 0.0357          | 0.4600                               |
| 1.1905 | 300  | 0.0346        | 0.0286          | 0.4640                               |
| 1.5873 | 400  | 0.0313        | 0.0264          | 0.4698                               |
| 1.9841 | 500  | 0.0189        | 0.0256          | 0.4716                               |
| 2.3810 | 600  | 0.021         | 0.0249          | 0.4703                               |
| 2.7778 | 700  | 0.0264        | 0.0247          | 0.4726                               |
| -1     | -1   | -             | -               | 0.4252                               |

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.5.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}