dev7halo committed
Commit bba96c5 · 1 parent: 3788969

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,451 @@
+ ---
+ language: []
+ library_name: sentence-transformers
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:600313
+ - loss:MultipleNegativesRankingLoss
+ - loss:CosineSimilarityLoss
+ base_model: klue/roberta-base
+ datasets: []
+ metrics:
+ - pearson_cosine
+ - spearman_cosine
+ - pearson_manhattan
+ - spearman_manhattan
+ - pearson_euclidean
+ - spearman_euclidean
+ - pearson_dot
+ - spearman_dot
+ - pearson_max
+ - spearman_max
+ widget:
+ - source_sentence: 사람은 무언가를 창조했다.
+   sentences:
+   - 한 남자가 악한 시기의 소동을 재현한다.
+   - 한 사람이 고속도로에서 오토바이를 타고 있다
+   - 개 두 마리가 있다.
+ - source_sentence: 모리스는 더 많은 것을 얻을 수 있을 만큼, 표면을 관통하는 독을 찾기 위해 조금 더 깊이 들어갔을 만큼 레우처와
+     가까웠다.
+   sentences:
+   - 키가 크다는 뜻인가요, 짧다는 뜻인가요?
+   - 모리스와 르우히터는 긴장된 관계를 맺고 있었고, 몇 년 동안 이야기를 나누지 않았다.
+   - 모리스는 루치터로부터 더 많은 정보를 얻을 수 있었어야 했다.
+ - source_sentence: 나는 확신할 수 없지만 그것이 전부라고 생각한다.
+   sentences:
+   - 음-흠 음, 내 생각엔 그게 다인 것 같아.
+   - 대사를 좀 더 암송해 주십시오.
+   - FDA는 1997년 6월 1일까지 발효일을 연장했으며 그 후 1년 동안 설계 제어 요건을 규제하지 않을 것입니다.
+ - source_sentence: 트램을 이용해 다른 스팟으로의 이동도 좋은 편입니다.
+   sentences:
+   - 알려줘. 이번 태풍 진행 방향이 어디인지.
+   - 사진으로 보는 것 만큼이나 좋은 숙소입니다
+   - 슬플 때는 빗속을 달려봐. 참는건 안돼.
+ - source_sentence: 한국기후·환경네트워크는 콘텐츠 기획 및 개발과 인센티브 제공 등 앱 운영을 주관하고 한국환경공단, 한국환경산업기술원은
+     앱 제작물 개발과 운영예산 등을 지원한다.
+   sentences:
+   - 한국기후환경네트워크는 콘텐츠 기획, 개발, 인센티브 등 앱 운영을 관리하고, 한국환경공단과 한국환경산업기술원은 앱 개발 및 운영 예산을 지원합니다.
+   - 그 수치는 2015년 메르스의 30퍼센트 감소에서 두 배 이상 증가했습니다.
+   - 두 사람이 집에 머무는 데 불편함이 없습니다.
+ pipeline_tag: sentence-similarity
+ model-index:
+ - name: SentenceTransformer based on klue/roberta-base
+   results:
+   - task:
+       type: semantic-similarity
+       name: Semantic Similarity
+     dataset:
+       name: sts dev
+       type: sts-dev
+     metrics:
+     - type: pearson_cosine
+       value: 0.9624678457183204
+       name: Pearson Cosine
+     - type: spearman_cosine
+       value: 0.9261175261590585
+       name: Spearman Cosine
+     - type: pearson_manhattan
+       value: 0.9524817581692175
+       name: Pearson Manhattan
+     - type: spearman_manhattan
+       value: 0.9224105408224054
+       name: Spearman Manhattan
+     - type: pearson_euclidean
+       value: 0.9524895420144286
+       name: Pearson Euclidean
+     - type: spearman_euclidean
+       value: 0.922316316791248
+       name: Spearman Euclidean
+     - type: pearson_dot
+       value: 0.9525268146709863
+       name: Pearson Dot
+     - type: spearman_dot
+       value: 0.9109078605792271
+       name: Spearman Dot
+     - type: pearson_max
+       value: 0.9624678457183204
+       name: Pearson Max
+     - type: spearman_max
+       value: 0.9261175261590585
+       name: Spearman Max
+ ---
+ 
+ # SentenceTransformer based on klue/roberta-base
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [klue/roberta-base](https://huggingface.co/klue/roberta-base) <!-- at revision 02f94ba5e3fcb7e2a58a390b8639b0fac974a8da -->
+ - **Maximum Sequence Length:** 128 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: RobertaModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
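+ 
+ The Pooling module produces one 768-dimensional vector per input by mean-pooling the token embeddings while ignoring padding. As a minimal sketch (assuming the standard attention-mask-weighted mean), the same computation can be reproduced with plain `transformers`:
+ 
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+ 
+ tokenizer = AutoTokenizer.from_pretrained("dev7halo/Ko-sroberta-base-multitask")
+ encoder = AutoModel.from_pretrained("dev7halo/Ko-sroberta-base-multitask")
+ 
+ def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
+     # Zero out padding positions, then average over the sequence dimension
+     mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
+ 
+ encoded = tokenizer(["개 두 마리가 있다."], padding=True, truncation=True, max_length=128, return_tensors="pt")
+ with torch.no_grad():
+     output = encoder(**encoded)
+ embedding = mean_pooling(output.last_hidden_state, encoded["attention_mask"])
+ print(embedding.shape)  # torch.Size([1, 768])
+ ```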
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("dev7halo/Ko-sroberta-base-multitask")
+ # Run inference
+ sentences = [
+     '한국기후·환경네트워크는 콘텐츠 기획 및 개발과 인센티브 제공 등 앱 운영을 주관하고 한국환경공단, 한국환경산업기술원은 앱 제작물 개발과 운영예산 등을 지원한다.',
+     '한국기후환경네트워크는 콘텐츠 기획, 개발, 인센티브 등 앱 운영을 관리하고, 한국환경공단과 한국환경산업기술원은 앱 개발 및 운영 예산을 지원합니다.',
+     '그 수치는 2015년 메르스의 30퍼센트 감소에서 두 배 이상 증가했습니다.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ ## Evaluation
+ 
+ ### Metrics
+ 
+ #### Semantic Similarity
+ * Dataset: `sts-dev`
+ * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
+ 
+ | Metric             | Value      |
+ |:-------------------|:-----------|
+ | pearson_cosine     | 0.9625     |
+ | spearman_cosine    | 0.9261     |
+ | pearson_manhattan  | 0.9525     |
+ | spearman_manhattan | 0.9224     |
+ | pearson_euclidean  | 0.9525     |
+ | spearman_euclidean | 0.9223     |
+ | pearson_dot        | 0.9525     |
+ | spearman_dot       | 0.9109     |
+ | pearson_max        | 0.9625     |
+ | **spearman_max**   | **0.9261** |
+ 
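+ As a minimal sketch of reproducing such an evaluation (the sentence pairs and gold scores below are hypothetical placeholders, not the actual sts-dev data):
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator
+ 
+ model = SentenceTransformer("dev7halo/Ko-sroberta-base-multitask")
+ 
+ # Gold similarity scores are expected on a [0, 1] scale
+ evaluator = EmbeddingSimilarityEvaluator(
+     sentences1=["개 두 마리가 있다.", "두 남자가 농구를 하고 있다."],
+     sentences2=["두 마리의 개가 있다.", "바텐더가 술을 만들고 있다."],
+     scores=[0.95, 0.05],
+     name="sts-dev",
+ )
+ results = evaluator(model)  # in Sentence Transformers 3.x this returns a dict of correlations
+ print(results)
+ ```
+ 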
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Datasets
+ 
+ #### Unnamed Dataset
+ 
+ * Size: 588,126 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0                                                                          | sentence_1                                                                          | sentence_2                                                                         |
+   |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
+   | type    | string                                                                              | string                                                                              | string                                                                             |
+   | details | <ul><li>min: 4 tokens</li><li>mean: 19.08 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.94 tokens</li><li>max: 122 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.88 tokens</li><li>max: 53 tokens</li></ul> |
+ * Samples:
+   | sentence_0                                | sentence_1                                                      | sentence_2                                     |
+   |:--------------------------------------------|:------------------------------------------------------------------|:--------------------------------------------------|
+   | <code>바에서 호박을 곁들인 음료를 준비하는 여성 바텐더</code> | <code>바텐더가 술을 만들고 있다.</code>                           | <code>여자가 보드카를 마시고 있다.</code>            |
+   | <code>두 남자가 낮에 구조물 근처를 걷고 있다.</code>      | <code>아름다운 화창한 날 건물을 산책하는 두 남자.</code>                | <code>남자 몇 명이 코이와 함께 연못에서 수영을 하고 있다.</code> |
+   | <code>두 사람이 꽃으로 둘러싸인 야외에 있다.</code>       | <code>한 남자와 그의 딸이 밝은 색의 노란 꽃밭에서 사진을 찍기 위해 포즈를 취하고 있다.</code> | <code>두 남자가 농구를 하고 있다.</code>             |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
+ 
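+ As a minimal sketch (assuming the Sentence Transformers 3.0 API), the loss above can be constructed as follows; the (anchor, positive, negative) columns map onto sentence_0/sentence_1/sentence_2:
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer, losses, util
+ 
+ # Loading the bare checkpoint attaches a default mean-pooling head
+ model = SentenceTransformer("klue/roberta-base")
+ # In-batch negatives: every other positive in the batch also serves as a negative
+ loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
+ ```
+ 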
+ #### Unnamed Dataset
+ 
+ * Size: 12,187 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0                                                                         | sentence_1                                                                        | label                                                           |
+   |:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:---------------------------------------------------------------------|
+   | type    | string                                                                             | string                                                                            | float                                                           |
+   | details | <ul><li>min: 5 tokens</li><li>mean: 20.56 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.1 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.45</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence_0                                                               | sentence_1                                                                      | label                           |
+   |:-----------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------|
+   | <code>강원영서 지역은 언제 옵니까? 소나기.</code>                                 | <code>라니냐가 일어날 때 해수면은 몇 도 정도 하강해?</code>                              | <code>0.0</code>                |
+   | <code>4월 ‘과학의 달’을 맞아 한 달 동안 언제 어디서나 과학기술을 즐길 수 있는 온라인 과학축제가 열린다.</code> | <code>4월의 "과학의 달"을 맞아, 언제 어디서나 한 달 동안 과학기술을 즐길 수 있는 온라인 과학 축제가 열릴 것입니다.</code> | <code>0.9199999999999999</code> |
+   | <code>호스트가 아닌 리스본 컨시어지에서 관리를 하는거라 전문적으로 관리되는 숙소입니다.</code>          | <code>이 숙소는 전문적으로 관리되며, 호스트가 아닌 리스본 컨시어지가 관리합니다.</code>                 | <code>0.76</code>               |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
+ 
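+ A minimal sketch of this loss, which regresses the cosine similarity of each embedding pair onto the float label with MSE:
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer, losses
+ 
+ model = SentenceTransformer("klue/roberta-base")
+ # For (sentence_0, sentence_1, label) rows: minimize MSE(cos_sim(u, v), label)
+ loss = losses.CosineSimilarityLoss(model)
+ ```
+ 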
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 128
+ - `per_device_eval_batch_size`: 128
+ - `num_train_epochs`: 5
+ - `multi_dataset_batch_sampler`: round_robin
+ 
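+ As an illustrative sketch (assuming the Sentence Transformers 3.0 trainer; `output_dir` is a hypothetical path), these non-default values map onto `SentenceTransformerTrainingArguments`:
+ 
+ ```python
+ from sentence_transformers.training_args import (
+     MultiDatasetBatchSamplers,
+     SentenceTransformerTrainingArguments,
+ )
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="output",  # hypothetical
+     eval_strategy="steps",
+     per_device_train_batch_size=128,
+     per_device_eval_batch_size=128,
+     num_train_epochs=5,
+     # Alternate batches between the two training datasets each step
+     multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
+ )
+ ```
+ 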
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 128
+ - `per_device_eval_batch_size`: 128
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 5
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+ 
+ </details>
+ 
+ ### Training Logs
+ | Epoch  | Step | sts-dev_spearman_max |
+ |:------:|:----:|:--------------------:|
+ | 1.0052 | 193  | 0.9215               |
+ | 2.0052 | 386  | 0.9261               |
+ 
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.0.1
+ - Transformers: 4.41.2
+ - PyTorch: 2.3.0+cu121
+ - Accelerate: 0.31.0
+ - Datasets: 2.19.2
+ - Tokenizers: 0.19.1
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "./sentence_transformers/output/best",
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "tokenizer_class": "BertTokenizer",
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 32000
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.41.2",
+     "pytorch": "2.3.0+cu121"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b8d561088b96480addd8bd500b477fcc17925d554e64239a9396abcb30f92d67
+ size 442494816
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 128,
+   "do_lower_case": false
+ }
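
Together with 1_Pooling/config.json above, these two files describe the pipeline the library reassembles at load time: module 0 is the RoBERTa encoder truncated to 128 tokens, module 1 is mean pooling. A minimal sketch of building the same pipeline by hand (illustrative only, assuming the `sentence_transformers.models` API):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the encoder, truncating inputs to 128 tokens (sentence_bert_config.json)
word_embedding_model = models.Transformer("klue/roberta-base", max_seq_length=128)
# Module 1: mean pooling over token embeddings (1_Pooling/config.json)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```
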
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,66 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "[CLS]",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "max_length": 128,
+   "model_max_length": 128,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff