metadata
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:16199
  - loss:CustomBatchAllTripletLoss
widget:
  - source_sentence: 科目:コンクリート。名称:立上り壁コンクリート。
    sentences:
      - 科目:ユニット及びその他。名称:棚。
      - 科目:ユニット及びその他。名称:事務室スチールパーティション。
      - 科目:ユニット及びその他。名称:F-R#収納棚。
  - source_sentence: 科目:タイル。名称:段鼻タイル。
    sentences:
      - 科目:タイル。名称:巾木磁器質タイル。
      - 科目:タイル。名称:立上りタイルA。
      - 科目:タイル。名称:アプローチテラス立上り天端床タイルA。
  - source_sentence: 科目:ユニット及びその他。名称:#階F-WC#他パウダーカウンター。
    sentences:
      - 科目:ユニット及びその他。名称:便所フック(二段)。
      - 科目:ユニット及びその他。名称:テラス床ウッドデッキ。
      - 科目:ユニット及びその他。名称:フラットテラス床ウッドデッキ。
  - source_sentence: 科目:ユニット及びその他。名称:階数表示+停止階案内サイン。
    sentences:
      - 科目:ユニット及びその他。名称:エレベーターホール入口サイン。
      - 科目:ユニット及びその他。名称:場外離着陸用オイルトラップ。
      - 科目:ユニット及びその他。名称:器材カウンター。
  - source_sentence: 科目:ユニット及びその他。名称:階段内踊場階数サイン。
    sentences:
      - 科目:ユニット及びその他。名称:F-T#布団収納棚。
      - 科目:ユニット及びその他。名称:#F廊下#飾り棚。
      - 科目:ユニット及びその他。名称:F-#階理科室#収納棚。
pipeline_tag: sentence-similarity
library_name: sentence-transformers

SentenceTransformer

This is a sentence-transformers model. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
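
The pooling module uses the CLS token embedding (pooling_mode_cls_token: True) rather than mean pooling, so each text is represented by the encoder's [CLS] vector. A small sketch for confirming the loaded module configuration through the public sentence-transformers API (nothing here is specific to this model beyond its repository name):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v1_1")

# The model is a two-module pipeline: a BERT encoder followed by a pooling layer.
print(model.max_seq_length)                      # 512
print(model.get_sentence_embedding_dimension())  # 768
print(model[1].get_pooling_mode_str())           # "cls"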

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v1_1")
# Run inference
sentences = [
    '科目:ユニット及びその他。名称:階段内踊場階数サイン。',
    '科目:ユニット及びその他。名称:F-#階理科室#収納棚。',
    '科目:ユニット及びその他。名称:F-T#布団収納棚。',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
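
Because model.similarity returns the full pairwise score matrix, ranking candidates for semantic search only needs an argmax on top of the snippet above. A short sketch (the query and candidate strings are illustrative, taken from the widget examples in the metadata):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Detomo/cl-nagoya-sup-simcse-ja-nss-v1_1")

query = "科目:タイル。名称:段鼻タイル。"
candidates = [
    "科目:タイル。名称:巾木磁器質タイル。",
    "科目:タイル。名称:立上りタイルA。",
    "科目:ユニット及びその他。名称:棚。",
]

# Encode the query and candidates, then rank candidates by cosine similarity.
query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)
scores = model.similarity(query_embedding, candidate_embeddings)  # shape: [1, 3]
best = scores[0].argmax().item()
print(candidates[best], scores[0][best].item())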

Training Details

Training Dataset

Unnamed Dataset

  • Size: 16,199 training samples
  • Columns: sentence and label
  • Approximate statistics based on the first 1000 samples:
    sentence (type: string)
    • min: 11 tokens
    • mean: 18.73 tokens
    • max: 72 tokens
    label (type: int)
    • 0: ~0.30%
    • 1: ~0.30%
    • 2: ~0.30%
    • 3: ~0.30%
    • 4: ~2.40%
    • 5: ~0.30%
    • 6: ~0.30%
    • 7: ~0.30%
    • 8: ~0.30%
    • 9: ~0.30%
    • 10: ~0.30%
    • 11: ~0.40%
    • 12: ~0.30%
    • 13: ~0.30%
    • 14: ~0.40%
    • 15: ~0.30%
    • 16: ~0.30%
    • 17: ~0.30%
    • 18: ~0.90%
    • 19: ~0.30%
    • 20: ~1.30%
    • 21: ~0.30%
    • 22: ~1.10%
    • 23: ~0.30%
    • 24: ~0.30%
    • 25: ~0.30%
    • 26: ~0.30%
    • 27: ~0.30%
    • 28: ~0.30%
    • 29: ~0.30%
    • 30: ~0.30%
    • 31: ~0.30%
    • 32: ~0.30%
    • 33: ~0.30%
    • 34: ~0.30%
    • 35: ~0.30%
    • 36: ~0.30%
    • 37: ~0.30%
    • 38: ~0.30%
    • 39: ~0.30%
    • 40: ~0.40%
    • 41: ~0.30%
    • 42: ~0.30%
    • 43: ~0.30%
    • 44: ~0.60%
    • 45: ~0.70%
    • 46: ~0.30%
    • 47: ~0.30%
    • 48: ~0.30%
    • 49: ~0.30%
    • 50: ~0.30%
    • 51: ~0.30%
    • 52: ~0.30%
    • 53: ~0.30%
    • 54: ~0.30%
    • 55: ~0.30%
    • 56: ~0.30%
    • 57: ~0.80%
    • 58: ~0.30%
    • 59: ~0.30%
    • 60: ~0.60%
    • 61: ~0.30%
    • 62: ~0.30%
    • 63: ~0.30%
    • 64: ~0.50%
    • 65: ~0.30%
    • 66: ~0.30%
    • 67: ~0.30%
    • 68: ~0.30%
    • 69: ~0.50%
    • 70: ~0.60%
    • 71: ~0.30%
    • 72: ~0.30%
    • 73: ~0.30%
    • 74: ~0.30%
    • 75: ~0.30%
    • 76: ~0.30%
    • 77: ~0.30%
    • 78: ~0.30%
    • 79: ~0.30%
    • 80: ~0.30%
    • 81: ~0.30%
    • 82: ~0.30%
    • 83: ~0.30%
    • 84: ~0.80%
    • 85: ~0.60%
    • 86: ~0.50%
    • 87: ~0.30%
    • 88: ~0.30%
    • 89: ~16.30%
    • 90: ~0.30%
    • 91: ~0.30%
    • 92: ~0.30%
    • 93: ~0.30%
    • 94: ~0.30%
    • 95: ~0.30%
    • 96: ~0.30%
    • 97: ~0.30%
    • 98: ~0.50%
    • 99: ~0.30%
    • 100: ~0.30%
    • 101: ~0.30%
    • 102: ~0.30%
    • 103: ~0.30%
    • 104: ~0.30%
    • 105: ~0.30%
    • 106: ~0.30%
    • 107: ~0.70%
    • 108: ~0.30%
    • 109: ~3.20%
    • 110: ~0.30%
    • 111: ~0.40%
    • 112: ~2.30%
    • 113: ~0.30%
    • 114: ~0.30%
    • 115: ~0.50%
    • 116: ~0.50%
    • 117: ~0.50%
    • 118: ~0.40%
    • 119: ~0.30%
    • 120: ~0.30%
    • 121: ~0.30%
    • 122: ~0.80%
    • 123: ~0.30%
    • 124: ~0.30%
    • 125: ~0.30%
    • 126: ~0.30%
    • 127: ~0.30%
    • 128: ~0.30%
    • 129: ~0.30%
    • 130: ~0.30%
    • 131: ~0.50%
    • 132: ~0.30%
    • 133: ~0.40%
    • 134: ~0.30%
    • 135: ~0.30%
    • 136: ~0.30%
    • 137: ~0.30%
    • 138: ~0.30%
    • 139: ~0.30%
    • 140: ~0.30%
    • 141: ~0.30%
    • 142: ~0.30%
    • 143: ~0.30%
    • 144: ~0.40%
    • 145: ~0.30%
    • 146: ~0.30%
    • 147: ~0.30%
    • 148: ~0.30%
    • 149: ~0.30%
    • 150: ~0.30%
    • 151: ~0.70%
    • 152: ~0.30%
    • 153: ~0.30%
    • 154: ~0.30%
    • 155: ~1.30%
    • 156: ~0.30%
    • 157: ~0.40%
    • 158: ~0.30%
    • 159: ~0.30%
    • 160: ~0.30%
    • 161: ~1.50%
    • 162: ~0.30%
    • 163: ~0.30%
    • 164: ~0.30%
    • 165: ~0.30%
    • 166: ~0.30%
    • 167: ~0.30%
    • 168: ~0.30%
    • 169: ~1.50%
    • 170: ~0.30%
    • 171: ~0.30%
    • 172: ~7.20%
    • 173: ~0.30%
    • 174: ~1.00%
    • 175: ~0.30%
    • 176: ~0.30%
    • 177: ~0.30%
    • 178: ~1.80%
    • 179: ~0.30%
    • 180: ~0.50%
    • 181: ~0.70%
    • 182: ~0.30%
    • 183: ~0.30%
  • Samples (sentence → label):
    科目:コンクリート。名称:免震基礎天端グラウト注入。 → 0
    科目:コンクリート。名称:免震基礎天端グラウト注入。 → 0
    科目:コンクリート。名称:免震基礎天端グラウト注入。 → 0
  • Loss: sentence_transformer_lib.custom_batch_all_trip_loss.CustomBatchAllTripletLoss
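
CustomBatchAllTripletLoss itself is not included in this card. For orientation, below is a minimal PyTorch sketch of the standard batch-all triplet objective it is presumably derived from (see the Hermans et al. citation at the end): for every valid (anchor, positive, negative) triplet in a batch, a hinge on margin + d(a, p) - d(a, n), averaged over the triplets still violating the margin. This is a stand-in illustration, not the custom implementation, and the margin value is illustrative.

import torch

def batch_all_triplet_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                           margin: float = 5.0) -> torch.Tensor:
    """Batch-all triplet loss (Hermans et al., 2017) over Euclidean distances."""
    # Pairwise Euclidean distance matrix, shape [batch, batch].
    dist = torch.cdist(embeddings, embeddings, p=2)

    # valid[a, p, n] is True when label[a] == label[p], a != p, and label[a] != label[n].
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & not_self                             # anchor-positive pairs
    neg_mask = ~same                                       # anchor-negative pairs
    valid = pos_mask.unsqueeze(2) & neg_mask.unsqueeze(1)  # [batch, batch, batch]

    # Hinge loss for every (anchor, positive, negative) combination.
    triplet = dist.unsqueeze(2) - dist.unsqueeze(1) + margin  # d(a,p) - d(a,n) + margin
    triplet = triplet.clamp(min=0) * valid

    # Average over the triplets that are both valid and still active (loss > 0).
    num_active = (triplet > 0).sum().clamp(min=1)
    return triplet.sum() / num_active

# Toy usage: 8 embeddings, 4 labels with 2 members each.
emb = torch.randn(8, 768)
lbl = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(batch_all_triplet_loss(emb, lbl))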

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • num_train_epochs: 250
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: group_by_label
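
A hedged sketch of wiring these non-default hyperparameters into the sentence-transformers v3 trainer API follows. The base checkpoint (cl-nagoya/sup-simcse-ja-base, inferred from the model name) and the stock BatchAllTripletLoss are assumptions; the card only names CustomBatchAllTripletLoss, whose code is not included here.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import BatchAllTripletLoss
from sentence_transformers.training_args import BatchSamplers

# Base checkpoint is an assumption inferred from the model name.
model = SentenceTransformer("cl-nagoya/sup-simcse-ja-base")

# Tiny stand-in for the 16,199-row training set with "sentence" and "label" columns.
train_dataset = Dataset.from_dict({
    "sentence": [
        "科目:コンクリート。名称:免震基礎天端グラウト注入。",
        "科目:タイル。名称:段鼻タイル。",
    ],
    "label": [0, 1],
})

loss = BatchAllTripletLoss(model)  # stand-in for CustomBatchAllTripletLoss

args = SentenceTransformerTrainingArguments(
    output_dir="cl-nagoya-sup-simcse-ja-nss-v1_1",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=1e-5,
    weight_decay=0.01,
    num_train_epochs=250,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.GROUP_BY_LABEL,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()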

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 512
  • per_device_eval_batch_size: 512
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.01
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 250
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: group_by_label
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
4.125 100 0.0682
8.25 200 0.0745
12.375 300 0.0764
16.5 400 0.0778
20.625 500 0.077
24.75 600 0.0767
29.125 700 0.0738
33.25 800 0.0701
37.375 900 0.0677
41.5 1000 0.0689
45.625 1100 0.0661
49.75 1200 0.0677
54.125 1300 0.0627
58.25 1400 0.0629
62.375 1500 0.0625
66.5 1600 0.0655
70.625 1700 0.0645
74.75 1800 0.0595
79.125 1900 0.0608
83.25 2000 0.0614
87.375 2100 0.0567
91.5 2200 0.0612
95.625 2300 0.0599
99.75 2400 0.059
104.125 2500 0.0547
108.25 2600 0.0571
112.375 2700 0.0543
116.5 2800 0.0574
120.625 2900 0.0561
124.75 3000 0.0534
129.125 3100 0.0554
133.25 3200 0.0507
137.375 3300 0.0533
141.5 3400 0.05
145.625 3500 0.0569
149.75 3600 0.0551
154.125 3700 0.0558
158.25 3800 0.0539
162.375 3900 0.0498
166.5 4000 0.0512
170.625 4100 0.0481
174.75 4200 0.0492
179.125 4300 0.0513
183.25 4400 0.0474
187.375 4500 0.0491
191.5 4600 0.0513
195.625 4700 0.0453
199.75 4800 0.0453
204.125 4900 0.0489
208.25 5000 0.0481
212.375 5100 0.0498
216.5 5200 0.044
220.625 5300 0.0486
224.75 5400 0.0399
229.125 5500 0.0384
233.25 5600 0.0428
237.375 5700 0.0447
241.5 5800 0.0479
245.625 5900 0.0434
249.75 6000 0.0442

Framework Versions

  • Python: 3.11.12
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.5.2
  • Datasets: 3.5.0
  • Tokenizers: 0.21.1
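
To recreate this environment, the versions above can be pinned at install time; a sketch (the exact pins follow the list, and the CUDA build of PyTorch depends on your platform):

pip install sentence-transformers==3.4.1 transformers==4.51.3 torch==2.6.0 accelerate==1.5.2 datasets==3.5.0 tokenizers==0.21.1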

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CustomBatchAllTripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}