CalebR84 committed
Commit 98bba30 · verified · 1 Parent(s): 28cc1ad

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,586 @@
+ ---
+ language:
+ - en
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:100000
+ - loss:OnlineContrastiveLoss
+ base_model: sentence-transformers/stsb-distilbert-base
+ widget:
+ - source_sentence: Why does Jimmy Wales choose to reside in London instead of the
+     US?
+   sentences:
+   - What are the challenges in RTE?
+   - Why did Jimmy Wales move to London?
+   - What are the most common barriers that affect the effective communication of a
+     family?
+ - source_sentence: How I can speak English with fluency?
+   sentences:
+   - How can I improve my English Language?
+   - What are some ways to be a good storyteller?
+   - What are good ways to count your calories?
+ - source_sentence: How can I come out of my depression?
+   sentences:
+   - Will I lose my Yahoo account email when I leave AT&T just because all my Yahoo
+     account email has my old AT&T (pacbell.net) name?
+   - How did you come out of depression?
+   - How can I become rich by the age of 25?
+ - source_sentence: What would I get if I could somehow merge two, three or four nuclei
+     of nitrogen-15?
+   sentences:
+   - What would I get if I could merge two nuclei of argon-40?
+   - What is the historical relationship between the Mexican war and War of 1812? Any
+     thoughts?
+   - Do employees at Excel Trust have a good work-life balance? Does this differ across
+     positions and departments?
+ - source_sentence: In mythologies, how do mermaids reproduce?
+   sentences:
+   - How is it possible for cops to trace a lost mobile using the IMEI number even
+     after the SIM card has been taken out?
+   - I am a best friend of a girl who rejected me twice, can I still have a chance
+     of dating her?
+   - How do mermaids in myths have babies?
+ datasets:
+ - sentence-transformers/quora-duplicates
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy
+ - cosine_accuracy_threshold
+ - cosine_f1
+ - cosine_f1_threshold
+ - cosine_precision
+ - cosine_recall
+ - cosine_ap
+ - cosine_mcc
+ - average_precision
+ - f1
+ - precision
+ - recall
+ - threshold
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on sentence-transformers/stsb-distilbert-base
+   results:
+   - task:
+       type: binary-classification
+       name: Binary Classification
+     dataset:
+       name: quora duplicates
+       type: quora-duplicates
+     metrics:
+     - type: cosine_accuracy
+       value: 0.873
+       name: Cosine Accuracy
+     - type: cosine_accuracy_threshold
+       value: 0.815665602684021
+       name: Cosine Accuracy Threshold
+     - type: cosine_f1
+       value: 0.8341836734693877
+       name: Cosine F1
+     - type: cosine_f1_threshold
+       value: 0.8002265691757202
+       name: Cosine F1 Threshold
+     - type: cosine_precision
+       value: 0.8114143920595533
+       name: Cosine Precision
+     - type: cosine_recall
+       value: 0.8582677165354331
+       name: Cosine Recall
+     - type: cosine_ap
+       value: 0.9065066687277681
+       name: Cosine Ap
+     - type: cosine_mcc
+       value: 0.7281893615860209
+       name: Cosine Mcc
+   - task:
+       type: paraphrase-mining
+       name: Paraphrase Mining
+     dataset:
+       name: quora duplicates dev
+       type: quora-duplicates-dev
+     metrics:
+     - type: average_precision
+       value: 0.5364240829748219
+       name: Average Precision
+     - type: f1
+       value: 0.5459832968781071
+       name: F1
+     - type: precision
+       value: 0.5433094236952758
+       name: Precision
+     - type: recall
+       value: 0.5486836189239147
+       name: Recall
+     - type: threshold
+       value: 0.8692173957824707
+       name: Threshold
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.9294
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.9706
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.9782
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.9872
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.9294
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.4145333333333334
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.26644
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.14156000000000002
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.8007371472808238
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.9326976956997253
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.9557324145037969
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.9757744890011949
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.951609899037309
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.9512162698412696
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.939197562645401
+       name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on sentence-transformers/stsb-distilbert-base
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/stsb-distilbert-base](https://huggingface.co/sentence-transformers/stsb-distilbert-base) on the [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/stsb-distilbert-base](https://huggingface.co/sentence-transformers/stsb-distilbert-base) <!-- at revision a560fa5fec90547a51a4a41a392d4aef93b49f16 -->
+ - **Maximum Sequence Length:** 128 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+     - [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
+ - **Language:** en
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
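+
+ For readers unfamiliar with the Pooling module above: with `pooling_mode_mean_tokens` enabled, the token embeddings produced by DistilBERT are averaged (ignoring padding) into a single 768-dimensional sentence vector. A rough, illustrative equivalent is sketched below; the actual implementation lives in `sentence_transformers.models.Pooling`.
+
+ ```python
+ import torch
+
+ def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
+     # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
+     mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
+     summed = (token_embeddings * mask).sum(dim=1)    # sum over real (non-padding) tokens
+     counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens per sentence
+     return summed / counts                           # (batch, 768) sentence embeddings
+ ```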
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("CalebR84/stsb-distilbert-base-ocl")
+ # Run inference
+ sentences = [
+     'In mythologies, how do mermaids reproduce?',
+     'How do mermaids in myths have babies?',
+     'I am a best friend of a girl who rejected me twice, can I still have a chance of dating her?',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Binary Classification
+
+ * Dataset: `quora-duplicates`
+ * Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
+
+ | Metric | Value |
+ |:--------------------------|:-----------|
+ | cosine_accuracy | 0.873 |
+ | cosine_accuracy_threshold | 0.8157 |
+ | cosine_f1 | 0.8342 |
+ | cosine_f1_threshold | 0.8002 |
+ | cosine_precision | 0.8114 |
+ | cosine_recall | 0.8583 |
+ | **cosine_ap** | **0.9065** |
+ | cosine_mcc | 0.7282 |
+
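+ For example, the `cosine_f1_threshold` above (about 0.80) can serve as a starting decision boundary for flagging duplicate questions. The sketch below is illustrative only: the helper name and example questions are made up, and the threshold should be re-tuned on your own data.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("CalebR84/stsb-distilbert-base-ocl")
+
+ def is_duplicate(q1: str, q2: str, threshold: float = 0.80) -> bool:
+     # Encode both questions and compare their cosine similarity against
+     # the F1-optimal threshold reported in the table above.
+     emb = model.encode([q1, q2])
+     score = model.similarity(emb[0:1], emb[1:2]).item()
+     return score >= threshold
+
+ print(is_duplicate("How can I learn to cook quickly?",
+                    "What is the fastest way to learn cooking?"))
+ ```
+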
+ #### Paraphrase Mining
+
+ * Dataset: `quora-duplicates-dev`
+ * Evaluated with [<code>ParaphraseMiningEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.ParaphraseMiningEvaluator) with these parameters:
+   ```json
+   {'add_transitive_closure': <function ParaphraseMiningEvaluator.add_transitive_closure>, 'max_pairs': 500000, 'top_k': 100}
+   ```
+
+ | Metric | Value |
+ |:----------------------|:-----------|
+ | **average_precision** | **0.5364** |
+ | f1 | 0.546 |
+ | precision | 0.5433 |
+ | recall | 0.5487 |
+ | threshold | 0.8692 |
+
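+ Paraphrase mining over a larger pool of sentences can be run directly with the library's utility function. A minimal sketch (the example questions and the 0.87 cut-off, taken from the threshold above, are illustrative):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer("CalebR84/stsb-distilbert-base-ocl")
+
+ questions = [
+     "How can I improve my English?",
+     "What are good ways to become fluent in English?",
+     "How do I count calories accurately?",
+     "What is the best way to track calorie intake?",
+ ]
+
+ # Each mined pair is [cosine_score, index_a, index_b], sorted by score.
+ pairs = util.paraphrase_mining(model, questions, top_k=100)
+
+ for score, i, j in pairs:
+     if score >= 0.87:  # decision threshold reported in the table above
+         print(f"{score:.3f}  {questions[i]}  <->  {questions[j]}")
+ ```
+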
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.9294 |
+ | cosine_accuracy@3 | 0.9706 |
+ | cosine_accuracy@5 | 0.9782 |
+ | cosine_accuracy@10 | 0.9872 |
+ | cosine_precision@1 | 0.9294 |
+ | cosine_precision@3 | 0.4145 |
+ | cosine_precision@5 | 0.2664 |
+ | cosine_precision@10 | 0.1416 |
+ | cosine_recall@1 | 0.8007 |
+ | cosine_recall@3 | 0.9327 |
+ | cosine_recall@5 | 0.9557 |
+ | cosine_recall@10 | 0.9758 |
+ | **cosine_ndcg@10** | **0.9516** |
+ | cosine_mrr@10 | 0.9512 |
+ | cosine_map@100 | 0.9392 |
+
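+ In a retrieval setting, the model is typically used to embed a corpus once and then search it with embedded queries. A small, self-contained sketch (the corpus and query below are illustrative):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer("CalebR84/stsb-distilbert-base-ocl")
+
+ corpus = [
+     "How do I change my Facebook profile name?",
+     "What are good ways to count your calories?",
+     "Why did Jimmy Wales move to London?",
+ ]
+ queries = ["How can I rename my Facebook profile?"]
+
+ corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
+ query_embeddings = model.encode(queries, convert_to_tensor=True)
+
+ # Retrieve the two most similar corpus entries for each query.
+ hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
+ for hit in hits[0]:
+     print(f"{hit['score']:.3f}  {corpus[hit['corpus_id']]}")
+ ```
+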
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### quora-duplicates
+
+ * Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
+ * Size: 100,000 training samples
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+   | | sentence1 | sentence2 | label |
+   |:--------|:----------|:----------|:------|
+   | type | string | string | int |
+   | details | <ul><li>min: 4 tokens</li><li>mean: 15.28 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.86 tokens</li><li>max: 81 tokens</li></ul> | <ul><li>0: ~59.90%</li><li>1: ~40.10%</li></ul> |
+ * Samples:
+   | sentence1 | sentence2 | label |
+   |:----------|:----------|:------|
+   | <code>How can I change my profile name in Facebook?</code> | <code>How do I change my Facebook profile name?</code> | <code>1</code> |
+   | <code>How is the LNMIIT, Jaipur?</code> | <code>Where is LNMIIT located in Jaipur?</code> | <code>0</code> |
+   | <code>I moved to the U.S. on my senior year and have aprox. 2 months to get ready for the SAT (or ACT). What would be the best strategy for me?</code> | <code>I have been living abroad for almost 10 years and am about to move back to the U.S. (Bay Area to be exact). What advice would you give me regarding a job search, a place to live, healthcare, etc.?</code> | <code>0</code> |
+ * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
+
+ ### Evaluation Dataset
+
+ #### quora-duplicates
+
+ * Dataset: [quora-duplicates](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
+ * Size: 1,000 evaluation samples
+ * Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+   | | sentence1 | sentence2 | label |
+   |:--------|:----------|:----------|:------|
+   | type | string | string | int |
+   | details | <ul><li>min: 3 tokens</li><li>mean: 15.27 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.94 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>0: ~61.90%</li><li>1: ~38.10%</li></ul> |
+ * Samples:
+   | sentence1 | sentence2 | label |
+   |:----------|:----------|:------|
+   | <code>What is utility?</code> | <code>What is utility programs?</code> | <code>0</code> |
+   | <code>Can you describe the process from the time you type in a website's URL to it finishing loading on your screen?</code> | <code>What are the series of steps that happen when an URL is requested from the address field of a browser?</code> | <code>0</code> |
+   | <code>What were the motives behind the 2016 Orlando nightclub shooting?</code> | <code>What motivated the shooter in the June 2016 Orlando nightclub shooting?</code> | <code>1</code> |
+ * Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `num_train_epochs`: 1
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `batch_sampler`: no_duplicates
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 64
+ - `per_device_eval_batch_size`: 64
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: no_duplicates
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
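+ Putting the dataset, loss, and non-default hyperparameters above together, a comparable finetuning run could be sketched as follows. This is an illustrative reconstruction, not the exact training script: the `pair-class` subset name and the way the train/eval splits are carved out are assumptions based on the card.
+
+ ```python
+ from datasets import load_dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import OnlineContrastiveLoss
+ from sentence_transformers.training_args import BatchSamplers
+
+ # Assumed subset name; the card lists columns sentence1, sentence2, label.
+ dataset = load_dataset("sentence-transformers/quora-duplicates", "pair-class", split="train")
+ train_dataset = dataset.select(range(100_000))          # 100,000 training samples
+ eval_dataset = dataset.select(range(100_000, 101_000))  # 1,000 evaluation samples
+
+ model = SentenceTransformer("sentence-transformers/stsb-distilbert-base")
+ loss = OnlineContrastiveLoss(model)
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="stsb-distilbert-base-ocl",
+     num_train_epochs=1,
+     per_device_train_batch_size=64,
+     per_device_eval_batch_size=64,
+     warmup_ratio=0.1,
+     fp16=True,
+     eval_strategy="steps",
+     batch_sampler=BatchSamplers.NO_DUPLICATES,
+ )
+
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     eval_dataset=eval_dataset,
+     loss=loss,
+ )
+ trainer.train()
+ ```
+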
+ ### Training Logs
+ | Epoch | Step | Training Loss | Validation Loss | quora-duplicates_cosine_ap | quora-duplicates-dev_average_precision | cosine_ndcg@10 |
+ |:------:|:----:|:-------------:|:---------------:|:--------------------------:|:--------------------------------------:|:--------------:|
+ | 0 | 0 | - | - | 0.7503 | 0.4200 | 0.9401 |
+ | 0.0640 | 100 | 2.4538 | - | - | - | - |
+ | 0.1280 | 200 | 2.1419 | - | - | - | - |
+ | 0.1599 | 250 | - | 1.9259 | 0.8580 | 0.4437 | 0.9350 |
+ | 0.1919 | 300 | 2.0272 | - | - | - | - |
+ | 0.2559 | 400 | 1.9581 | - | - | - | - |
+ | 0.3199 | 500 | 1.7846 | 1.8828 | 0.8845 | 0.4698 | 0.9431 |
+ | 0.3839 | 600 | 1.8462 | - | - | - | - |
+ | 0.4479 | 700 | 1.8053 | - | - | - | - |
+ | 0.4798 | 750 | - | 1.6727 | 0.8933 | 0.5039 | 0.9468 |
+ | 0.5118 | 800 | 1.7161 | - | - | - | - |
+ | 0.5758 | 900 | 1.6574 | - | - | - | - |
+ | 0.6398 | 1000 | 1.6719 | 1.5868 | 0.9009 | 0.5152 | 0.9471 |
+ | 0.7038 | 1100 | 1.693 | - | - | - | - |
+ | 0.7678 | 1200 | 1.6622 | - | - | - | - |
+ | 0.7997 | 1250 | - | 1.5453 | 0.9047 | 0.5257 | 0.9462 |
+ | 0.8317 | 1300 | 1.6129 | - | - | - | - |
+ | 0.8957 | 1400 | 1.5736 | - | - | - | - |
+ | 0.9597 | 1500 | 1.6402 | 1.5463 | 0.9065 | 0.5364 | 0.9516 |
+
+
+ ### Framework Versions
+ - Python: 3.12.9
+ - Sentence Transformers: 4.1.0
+ - Transformers: 4.51.3
+ - PyTorch: 2.7.0+cu126
+ - Accelerate: 1.7.0
+ - Datasets: 3.6.0
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "activation": "gelu",
+   "architectures": [
+     "DistilBertModel"
+   ],
+   "attention_dropout": 0.1,
+   "dim": 768,
+   "dropout": 0.1,
+   "hidden_dim": 3072,
+   "initializer_range": 0.02,
+   "max_position_embeddings": 512,
+   "model_type": "distilbert",
+   "n_heads": 12,
+   "n_layers": 6,
+   "pad_token_id": 0,
+   "qa_dropout": 0.1,
+   "seq_classif_dropout": 0.2,
+   "sinusoidal_pos_embds": false,
+   "tie_weights_": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.51.3",
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "4.1.0",
+     "transformers": "4.51.3",
+     "pytorch": "2.7.0+cu126"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc141f489bcb457d59f9a97f3271f6df9fda7b32f4ffef45ea9c559a72f4fc83
+ size 265462608
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 128,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": false,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "full_tokenizer_file": null,
+   "mask_token": "[MASK]",
+   "model_max_length": 128,
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "DistilBertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff