SentenceTransformer based on Alibaba-NLP/gte-base-en-v1.5

This is a sentence-transformers model finetuned from Alibaba-NLP/gte-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: Alibaba-NLP/gte-base-en-v1.5
  • Maximum Sequence Length: 32 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 32, 'do_lower_case': False}) with Transformer model: NewModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
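
The pooling module uses the CLS token (pooling_mode_cls_token: True), so the sentence embedding is the hidden state of the first token rather than a mean over all tokens. As a rough sketch of what that amounts to with plain transformers (assuming the transformer weights in this repo load directly via AutoModel; trust_remote_code is required because of the custom NewModel class):

import torch
from transformers import AutoModel, AutoTokenizer

repo = "albertus-sussex/veriscrape-sbert-camera-wo-ref-deepseek-chat"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

batch = tokenizer(
    ["Bell & Howell Z10T ZoomTouch 10MP Touchscreen Digital Camera"],
    max_length=32, padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# CLS pooling: take the first token's hidden state as the sentence embedding
sentence_embedding = token_embeddings[:, 0]
print(sentence_embedding.shape)  # torch.Size([1, 768])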

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("albertus-sussex/veriscrape-sbert-camera-wo-ref-deepseek-chat")
# Run inference
sentences = [
    ': Bell + Howell',
    ': Olympus',
    'Bell & Howell Z10T ZoomTouch 10MP Touchscreen Digital Camera with Movie Mode, 3x Optical Zoom Lens, 3.0" LCD Screen, USB 2.0 - Silver',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
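
Continuing from the snippet above, the same encode/similarity pair also supports a simple query-against-corpus ranking. A minimal sketch (the query and corpus strings are illustrative, in the style of the dataset):

query_embedding = model.encode(["Canon"])
corpus_embeddings = model.encode([
    ": Bell + Howell",
    "$188.99",
    "Canon PowerShot SX130 IS 12.1MP Digital Camera - Black",
])
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
print(scores.argmax().item())  # index of the closest corpus entry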

Evaluation

Metrics

Triplet

Metric          | Value
--------------- | ------
cosine_accuracy | 0.9742

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
Metric               | Value
-------------------- | ------
silhouette_cosine    | 0.8823
silhouette_euclidean | 0.7565

Triplet

Metric          | Value
--------------- | ------
cosine_accuracy | 0.9733

Silhouette

  • Evaluated with veriscrape.training.SilhouetteEvaluator
Metric               | Value
-------------------- | ------
silhouette_cosine    | 0.8747
silhouette_euclidean | 0.7523
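
The cosine_accuracy figures are triplet accuracies: the fraction of triplets where the anchor embedding lands closer to the positive than to the negative. The first pair of tables matches the epoch-5 row of the Training Logs below, and the second pair matches the final evaluation row. A minimal sketch of such an evaluation with the built-in TripletEvaluator (the triplet here is one of the training samples shown below; SilhouetteEvaluator is a custom veriscrape.training class and is not sketched):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("albertus-sussex/veriscrape-sbert-camera-wo-ref-deepseek-chat")
evaluator = TripletEvaluator(
    anchors=["Casio Computer Co., Ltd"],
    positives=["Eastman Kodak Company"],
    negatives=["$324.99"],
)
results = evaluator(model)  # dict of metrics, including the cosine accuracy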

Training Details

Training Dataset

Unnamed Dataset

  • Size: 6,964 training samples
  • Columns: anchor, positive, negative, pos_attr_name, neg_attr_name, and website_id
  • Approximate statistics based on the first 1000 samples:
    Column        | Type   | Details
    ------------- | ------ | -------
    anchor        | string | min: 3, mean: 12.74, max: 32 tokens
    positive      | string | min: 3, mean: 12.66, max: 32 tokens
    negative      | string | min: 3, mean: 12.28, max: 32 tokens
    pos_attr_name | string | min: 3, mean: 3.0, max: 3 tokens
    neg_attr_name | string | min: 3, mean: 3.0, max: 3 tokens
    website_id    | int    | 0: ~9.80%, 1: ~9.00%, 2: ~6.80%, 3: ~11.00%, 4: ~10.20%, 5: ~9.60%, 6: ~10.80%, 7: ~10.70%, 8: ~10.30%, 9: ~11.80%
  • Samples:
    anchor | positive | negative | pos_attr_name | neg_attr_name | website_id
    ------ | -------- | -------- | ------------- | ------------- | ----------
    Casio Computer Co., Ltd | Eastman Kodak Company | $324.99 | manufacturer | price | 9
    $188.99 | $96.99 | Panasonic | price | manufacturer | 9
    GE J1250 Point & Shoot Digital Camera - Black | Pentax K-r 12.4 Megapixel Digital SLR Camera (Body with Lens Kit) - 18 mm-55 mm - Red | $299.00 | model | price | 2
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
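
These parameters map one-to-one onto the TripletLoss constructor in Sentence Transformers; a minimal sketch:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import TripletDistanceMetric, TripletLoss

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)
loss = TripletLoss(
    model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,  # Euclidean anchor-positive / anchor-negative distances
    triplet_margin=5,  # the positive must end up at least 5 units closer than the negative
)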
    

Evaluation Dataset

Unnamed Dataset

  • Size: 774 evaluation samples
  • Columns: anchor, positive, negative, pos_attr_name, neg_attr_name, and website_id
  • Approximate statistics based on the first 774 samples:
    Column        | Type   | Details
    ------------- | ------ | -------
    anchor        | string | min: 3, mean: 13.09, max: 32 tokens
    positive      | string | min: 3, mean: 13.14, max: 32 tokens
    negative      | string | min: 3, mean: 12.73, max: 32 tokens
    pos_attr_name | string | min: 3, mean: 3.0, max: 3 tokens
    neg_attr_name | string | min: 3, mean: 3.0, max: 3 tokens
    website_id    | int    | 0: ~10.98%, 1: ~11.11%, 2: ~5.30%, 3: ~10.59%, 4: ~10.34%, 5: ~9.04%, 6: ~10.59%, 7: ~10.85%, 8: ~12.02%, 9: ~9.17%
  • Samples:
    anchor | positive | negative | pos_attr_name | neg_attr_name | website_id
    ------ | -------- | -------- | ------------- | ------------- | ----------
    Eastman Kodak Company | Panasonic | Kodak EasyShare C143 12 Megapixel Compact Camera - Silver | manufacturer | model | 9
    SL605 - Digital camera - compact - 12.2 Mpix - optical zoom: 5 x - supported memory: SD, SDHC - black (EC-SL605ZBPBUS) | EASYSHARE C195 - Digital camera - compact - 14.0 Mpix - optical zoom: 5 x - supported memory: SD, SDHC - silver (8770414) | $184.99 | model | price | 7
    $186.99 | $96.99 | 10.2MP Cyber-shot TX7 Digital Camera - Red (DSCTX7/R) | price | model | 7
  • Loss: TripletLoss with these parameters:
    {
        "distance_metric": "TripletDistanceMetric.EUCLIDEAN",
        "triplet_margin": 5
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • num_train_epochs: 5
  • warmup_ratio: 0.1
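
Taken together, the non-default values above correspond to a trainer setup along these lines. This is a sketch, not the exact training script: the tiny inline dataset stands in for the unnamed 6,964-sample train split, and only the anchor/positive/negative columns that TripletLoss consumes are included.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletDistanceMetric, TripletLoss

model = SentenceTransformer("Alibaba-NLP/gte-base-en-v1.5", trust_remote_code=True)

# Illustrative stand-ins for the unnamed train/eval splits described above.
train_dataset = Dataset.from_dict({
    "anchor": ["Casio Computer Co., Ltd", "$188.99"],
    "positive": ["Eastman Kodak Company", "$96.99"],
    "negative": ["$324.99", "Panasonic"],
})

loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)

args = SentenceTransformerTrainingArguments(
    output_dir="veriscrape-sbert-camera",  # hypothetical output path
    num_train_epochs=5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    warmup_ratio=0.1,
    eval_strategy="epoch",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; the card's eval split has 774 samples
    loss=loss,
)
trainer.train()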

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch | Step | Training Loss | Validation Loss | cosine_accuracy | silhouette_cosine
----- | ---- | ------------- | --------------- | --------------- | -----------------
-1    | -1   | -             | -               | 0.9057          | 0.3675
1.0   | 55   | 0.3791        | 0.2219          | 0.9703          | 0.8847
2.0   | 110  | 0.1751        | 0.2184          | 0.9703          | 0.8844
3.0   | 165  | 0.1592        | 0.2365          | 0.9703          | 0.8832
4.0   | 220  | 0.1416        | 0.2636          | 0.9742          | 0.8812
5.0   | 275  | 0.1219        | 0.2560          | 0.9742          | 0.8823
-1    | -1   | -             | -               | 0.9733          | 0.8747

Framework Versions

  • Python: 3.10.16
  • Sentence Transformers: 4.0.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.6.0
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
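
To reproduce this environment, the versions above can be pinned directly (a convenience snippet; nearby versions will likely also work):

pip install sentence-transformers==4.0.1 transformers==4.45.2 torch==2.5.1 accelerate==1.6.0 datasets==3.1.0 tokenizers==0.20.3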

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}