# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
### Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
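The three modules correspond to token encoding, masked mean pooling, and L2 normalization. As a rough illustration (not the library's actual code), here is what modules (1) and (2) compute, using made-up token embeddings:

```python
import numpy as np

# Toy stand-in for module (0)'s output: token embeddings for one sentence,
# shape (num_tokens, hidden_dim). Values are made up for illustration.
token_embeddings = np.array([
    [1.0, 2.0, 3.0],
    [3.0, 2.0, 1.0],
    [2.0, 2.0, 2.0],
])
attention_mask = np.array([1, 1, 0])  # last token is padding

# Module (1): mean pooling over non-padding tokens only
mask = attention_mask[:, None]
pooled = (token_embeddings * mask).sum(axis=0) / mask.sum()

# Module (2): L2 normalization, so cosine similarity becomes a dot product
embedding = pooled / np.linalg.norm(pooled)

print(pooled)                                  # [2. 2. 2.]
print(np.round(np.linalg.norm(embedding), 6))  # 1.0
```

Note how the padded token is excluded from the mean by the attention mask.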
## Usage

### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Hgkang00/FT-label-consent-20")
# Run inference
sentences = [
    'I engage in risky behaviors like reckless driving or reckless sexual encounters.',
    'Symptoms during a manic episode include inflated self-esteem or grandiosity, increased goal-directed activity, or excessive involvement in risky activities.',
    'Marked decrease in functioning in areas like work, interpersonal relations, or self-care since the onset of the disturbance.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
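Because the final `Normalize()` module L2-normalizes every embedding, cosine similarity reduces to a plain matrix product. A small numpy sketch with dummy 384-dimensional vectors (stand-ins for real model outputs) illustrates this:

```python
import numpy as np

# Dummy embeddings standing in for model.encode() output; the real model
# already returns unit-norm vectors, so we normalize here to match.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(3, 384))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Cosine similarity of unit vectors is just a dot product
similarities = embeddings @ embeddings.T
print(similarities.shape)                  # (3, 3)
print(np.round(np.diag(similarities), 6))  # [1. 1. 1.]
```

The diagonal is exactly 1 because each sentence is maximally similar to itself.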
## Evaluation

### Metrics

#### Semantic Similarity

- Dataset: `FT_label`
- Evaluated with `EmbeddingSimilarityEvaluator`
| Metric             | Value  |
|:-------------------|:-------|
| pearson_cosine     | 0.4628 |
| spearman_cosine    | 0.4076 |
| pearson_manhattan  | 0.4816 |
| spearman_manhattan | 0.4067 |
| pearson_euclidean  | 0.4841 |
| spearman_euclidean | 0.4076 |
| pearson_dot        | 0.4628 |
| spearman_dot       | 0.4076 |
| pearson_max        | 0.4841 |
| spearman_max       | 0.4076 |
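`EmbeddingSimilarityEvaluator` reports correlations between the model's similarity scores and the gold labels. As a minimal illustration with made-up numbers (not FT_label data), the Pearson correlation underlying `pearson_cosine` can be computed like this:

```python
import numpy as np

# Hypothetical gold scores and model cosine similarities for five sentence
# pairs; the values are invented purely to show the computation.
gold   = np.array([ 1.0, -1.0, 0.0, -1.0,  1.0])
cosine = np.array([ 0.8, -0.3, 0.1, -0.5,  0.7])

# Pearson correlation between predicted similarities and gold scores
pearson_cosine = float(np.corrcoef(gold, cosine)[0, 1])
print(round(pearson_cosine, 4))
```

The Spearman variants apply the same idea to the ranks of the scores, making them insensitive to monotone rescaling of the similarities.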
## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 33,800 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

|         | sentence1 | sentence2 | score |
|:--------|:----------|:----------|:------|
| type    | string    | string    | float |
| details | min: 11 tokens, mean: 31.63 tokens, max: 63 tokens | min: 14 tokens, mean: 25.22 tokens, max: 41 tokens | min: -1.0, mean: -0.87, max: 1.0 |
- Samples:

| sentence1 | sentence2 | score |
|:----------|:----------|:------|
| Presence of one or more of the following intrusion symptoms associated with the traumatic event: recurrent distressing memories, dreams, flashbacks, psychological distress, or physiological reactions to cues of the traumatic event. | I avoid making phone calls, even to close friends or family, because I'm afraid of saying something wrong or sounding awkward. | 0.0 |
| The phobic object or situation almost always provokes immediate fear or anxiety. | I find it hard to stick to a consistent eating schedule, sometimes going days without feeling the need to eat at all. | -1.0 |
| The fear or anxiety is out of proportion to the actual danger posed by the specific object or situation and to the sociocultural context. | I have difficulty going to places where I feel there are no immediate exits, such as cinemas or auditoriums, as the fear of being stuck or unable to escape escalates my anxiety. | -1.0 |
- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
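As a rough numpy sketch (a simplification of `sentence_transformers.losses.CoSENTLoss`, using made-up values), the CoSENT objective with `scale=20.0` penalizes any two training pairs whose cosine similarities are ordered contrary to their gold scores:

```python
import numpy as np

scale = 20.0
# Hypothetical gold scores and model cosine similarities for three
# sentence pairs; values are invented for illustration.
gold   = np.array([ 1.0, -1.0, 0.0])
cosine = np.array([ 0.9, -0.2, 0.3])

# For every ordered pair (i, j) with gold_i > gold_j, collect
# scale * (cos_j - cos_i); positive entries mark rank violations.
diffs = [scale * (cosine[j] - cosine[i])
         for i in range(len(gold)) for j in range(len(gold))
         if gold[i] > gold[j]]

# loss = log(1 + sum(exp(diffs))); log1p keeps small sums stable
loss = float(np.log1p(np.sum(np.exp(diffs))))
print(round(loss, 6))
```

Here all three pairs are ranked consistently with the gold scores, so every `diff` is negative and the loss is close to zero.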
### Evaluation Dataset

#### Unnamed Dataset

- Size: 4,225 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

|         | sentence1 | sentence2 | score |
|:--------|:----------|:----------|:------|
| type    | string    | string    | float |
| details | min: 11 tokens, mean: 31.24 tokens, max: 63 tokens | min: 15 tokens, mean: 24.86 tokens, max: 41 tokens | min: -1.0, mean: -0.87, max: 1.0 |
- Samples:

| sentence1 | sentence2 | score |
|:----------|:----------|:------|
| Excessive anxiety and worry occurring more days than not for at least 6 months, about a number of events or activities such as work or school performance. | Simple activities like going for a walk or doing household chores feel like daunting tasks due to my low energy levels. | -1.0 |
| The individual fears acting in a way or showing anxiety symptoms that will be negatively evaluated, leading to humiliation, embarrassment, rejection, or offense to others. | I often find myself mindlessly snacking throughout the day due to changes in my appetite. | -1.0 |
| Persistent avoidance of stimuli associated with the trauma, evidenced by avoiding distressing memories, thoughts, or feelings, or external reminders of the event. | Simple activities like going for a walk or doing household chores feel like daunting tasks due to my low energy levels. | -1.0 |
- Loss: `CoSENTLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "pairwise_cos_sim"
  }
  ```
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 20
- `warmup_ratio`: 0.1
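These non-default values can be reproduced with `SentenceTransformerTrainingArguments` (available since Sentence Transformers 3.0). The sketch below assumes a hypothetical `output_dir`; everything else mirrors the list above:

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/FT-label-consent-20",  # assumption, not from the card
    eval_strategy="epoch",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    num_train_epochs=20,
    warmup_ratio=0.1,
)
```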
#### All Hyperparameters

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
### Training Logs

| Epoch | Step | Training Loss | loss    | FT_label_spearman_cosine |
|:-----:|:----:|:-------------:|:-------:|:------------------------:|
| 1.0   | 265  | -             | 6.9529  | 0.3450 |
| 2.0   | 530  | 7.5663        | 7.1002  | 0.4103 |
| 3.0   | 795  | -             | 7.4786  | 0.4155 |
| 4.0   | 1060 | 5.5492        | 8.6710  | 0.4115 |
| 5.0   | 1325 | -             | 10.3786 | 0.4056 |
| 6.0   | 1590 | 4.3991        | 10.4239 | 0.3987 |
| 7.0   | 1855 | -             | 11.8681 | 0.4238 |
| 8.0   | 2120 | 3.5916        | 13.0752 | 0.4030 |
| 9.0   | 2385 | -             | 12.8567 | 0.4240 |
| 10.0  | 2650 | 3.1139        | 12.4373 | 0.4270 |
| 11.0  | 2915 | -             | 13.6725 | 0.4212 |
| 12.0  | 3180 | 2.6658        | 15.0521 | 0.4134 |
| 13.0  | 3445 | -             | 15.4305 | 0.4114 |
| 14.0  | 3710 | 2.2024        | 15.5511 | 0.4060 |
| 15.0  | 3975 | -             | 14.9427 | 0.4165 |
| 16.0  | 4240 | 1.8955        | 14.8399 | 0.4162 |
| 17.0  | 4505 | -             | 15.0070 | 0.4170 |
| 18.0  | 4770 | 1.7120        | 15.4417 | 0.4105 |
| 19.0  | 5035 | -             | 15.6241 | 0.4086 |
| 20.0  | 5300 | 1.5088        | 15.6818 | 0.4076 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.0
- Transformers: 4.41.1
- PyTorch: 2.3.0+cu121
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```