---

language:
- en
tags:
- sentence-transformers
- cross-encoder
- generated_from_trainer
- dataset_size:78704
- loss:ListMLELoss
base_model: microsoft/MiniLM-L12-H384-uncased
datasets:
- microsoft/ms_marco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 95.02104960997458
  energy_consumed: 0.24445732105822607
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.917
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: CrossEncoder based on microsoft/MiniLM-L12-H384-uncased
  results:
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoMSMARCO R100
      type: NanoMSMARCO_R100
    metrics:
    - type: map
      value: 0.4975
      name: Map
    - type: mrr@10
      value: 0.4843
      name: Mrr@10
    - type: ndcg@10
      value: 0.5506
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNFCorpus R100
      type: NanoNFCorpus_R100
    metrics:
    - type: map
      value: 0.3252
      name: Map
    - type: mrr@10
      value: 0.5679
      name: Mrr@10
    - type: ndcg@10
      value: 0.3756
      name: Ndcg@10
  - task:
      type: cross-encoder-reranking
      name: Cross Encoder Reranking
    dataset:
      name: NanoNQ R100
      type: NanoNQ_R100
    metrics:
    - type: map
      value: 0.5857
      name: Map
    - type: mrr@10
      value: 0.5922
      name: Mrr@10
    - type: ndcg@10
      value: 0.657
      name: Ndcg@10
  - task:
      type: cross-encoder-nano-beir
      name: Cross Encoder Nano BEIR
    dataset:
      name: NanoBEIR R100 mean
      type: NanoBEIR_R100_mean
    metrics:
    - type: map
      value: 0.4695
      name: Map
    - type: mrr@10
      value: 0.5481
      name: Mrr@10
    - type: ndcg@10
      value: 0.5277
      name: Ndcg@10
---


# CrossEncoder based on microsoft/MiniLM-L12-H384-uncased

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) <!-- at revision 44acabbec0ef496f6dbc93adadea57f376b7c0ec -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
    - [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco)
- **Language:** en
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle")

# Get scores for pairs of texts
pairs = [
    ['How many calories in an egg', 'There are on average between 55 and 80 calories in an egg depending on its size.'],
    ['How many calories in an egg', 'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.'],
    ['How many calories in an egg', 'Most of the calories in an egg come from the yellow yolk in the center.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (3,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'How many calories in an egg',
    [
        'There are on average between 55 and 80 calories in an egg depending on its size.',
        'Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.',
        'Most of the calories in an egg come from the yellow yolk in the center.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
  ```json
  {
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```

| Metric      | NanoMSMARCO_R100     | NanoNFCorpus_R100    | NanoNQ_R100          |
|:------------|:---------------------|:---------------------|:---------------------|
| map         | 0.4975 (+0.0079)     | 0.3252 (+0.0642)     | 0.5857 (+0.1661)     |
| mrr@10      | 0.4843 (+0.0068)     | 0.5679 (+0.0681)     | 0.5922 (+0.1655)     |
| **ndcg@10** | **0.5506 (+0.0102)** | **0.3756 (+0.0505)** | **0.6570 (+0.1563)** |
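
This style of evaluation can be rerun on custom data by calling the evaluator directly. Below is a minimal sketch on a single hand-written sample; the `samples` format (dicts with a `query`, the known `positive` passages, and the candidate `documents` to rerank) follows the evaluator documentation linked above.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderRerankingEvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle")

# One reranking sample: a query, its known positives, and the candidates to rerank
samples = [
    {
        "query": "How many calories in an egg",
        "positive": ["There are on average between 55 and 80 calories in an egg depending on its size."],
        "documents": [
            "Most of the calories in an egg come from the yellow yolk in the center.",
            "There are on average between 55 and 80 calories in an egg depending on its size.",
            "Egg whites are very low in calories, have no fat, no cholesterol, and are loaded with protein.",
        ],
    },
]
evaluator = CrossEncoderRerankingEvaluator(samples=samples, at_k=10, always_rerank_positives=True)
results = evaluator(model)
print(results)  # dict containing map, mrr@10 and ndcg@10 values
```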



#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
  ```json
  {
      "dataset_names": [
          "msmarco",
          "nfcorpus",
          "nq"
      ],
      "rerank_k": 100,
      "at_k": 10,
      "always_rerank_positives": true
  }
  ```


| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.4695 (+0.0794)     |
| mrr@10      | 0.5481 (+0.0801)     |
| **ndcg@10** | **0.5277 (+0.0724)** |
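
These NanoBEIR results can be regenerated with the evaluator and the exact parameters listed above; a minimal sketch (the evaluator fetches the small NanoBEIR datasets from the Hugging Face Hub on first use):

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("tomaarsen/reranker-msmarco-v1.1-MiniLM-L12-H384-uncased-plistmle")

# Same parameters as reported for NanoBEIR_R100_mean
evaluator = CrossEncoderNanoBEIREvaluator(
    dataset_names=["msmarco", "nfcorpus", "nq"],
    rerank_k=100,
    at_k=10,
    always_rerank_positives=True,
)
results = evaluator(model)
print(results)  # per-dataset metrics plus their mean
```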

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 78,704 training samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                   | labels                                                                                 |
  |:--------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
  | type    | string                                                                                         | list                                                                                   | list                                                                                   |
  | details | <ul><li>min: 9 characters</li><li>mean: 33.73 characters</li><li>max: 119 characters</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 2 elements</li><li>mean: 6.00 elements</li><li>max: 10 elements</li></ul> |

* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>what to avoid during early pregnancy</code> | <code>['Although caffeine does not come under the category of foods to avoid during early pregnancy, pregnant women are advised to limit their caffeine consumption. Caffeine can be found in tea, coffee, soft drinks, chocolate etc.', 'Learn what foods to eat and what to avoid during pregnancy to ensure a healthy environment for your unborn baby! As a concerned parent, you want to do everything possible to ensure the well being and safety of your baby.', 'To stay safe, also avoid these foods during your pregnancy. Meats. 1  Cold cuts, deli meats, hot dogs, and other ready-to-eat meats. ( 2 You can safely eat these if they are heated to steaming and served hot.). 3  Pre-stuffed, fresh, turkey or chicken. 4  Steak tartare or any raw meat. 5  Rare cuts of meat and undercooked meats.', 'Raw and undercooked meat is another among foods to avoid during first trimester. Make sure that the meat is well cooked and consume while it is still hot. It would be good to avoid processed meat since pregnant wom...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>where is bells creek</code> | <code>['Simpsonville, SC Real Estate. #facebook# Bells Creek is a small neighborhood in Simpsonville, SC, close to Bells Crossing Elementary. Located near Woodruff Rd., I-85 and I-385, the upscale Bells Creek homes are typically on large lots with mature trees. Bells Creek amenities include a pool and cabana. Bells Creek real estate prices average $220,000.', '#facebook# Bells Creek is a small neighborhood in Simpsonville, SC, close to Bells Crossing Elementary. Located near Woodruff Rd., I-85 and I-385, the upscale Bells Creek homes are typically on large lots with mature trees. Bells Creek amenities include a pool and cabana. Bells Creek real estate prices average $220,000.', "Welcome to The Overlook at Bells Creek, an exclusive Eastwood Homes' Greenville area community only minutes away from Five Forks in Simpsonville, SC.", 'Property Details. Property details for 213 Bells Creek Dr, Simpsonville, SC 29681. This Single Family Home is located at Bells Creek in Simpsonville, South Carolina. The home provides approximately 2074 square feet of living space. This property features 4 bedrooms. There are 3 bathrooms. 213 Bells Creek Dr, Simpsonville, SC 29681 falls within the Greenville county lines. This home sold for $180,000 on Dec 17, 2014. Similar homes in the area are priced around $187,091.']</code> | <code>[1, 1, 0, 0]</code> |
  | <code>how long does it take to hatch geese eggs in an incubator</code> | <code>['Geese take 31 days of incubation for a goose egg to hatch. Whether underneath its parents, or in an incubator, the incubation time is the same.', 'While chicken eggs take 21 days, for example, geese can take between 30 and 35 days and need a higher humidity level. Goslings are also more likely to hatch if the eggs are sprayed with water every day between days 6 and about 25, whereas chicken eggs need to be kept in humid conditions, but dry.', 'Incubation Duration. Incubating goose eggs should be done for a period of about 28 days for smaller breeds, and up to 35 days for larger breeds before pipping begins. Once goose eggs begin hatching, the process can take up to three days before they are completely out of their shell.', "It takes 21 days to incubate the egg. the 21st day is the hatching day. if the eggs are mail order don't count the day the eggs arrive if the temperature is below 55 degrees f … . on the 21st day the eggs will hatch at all different times.", '5. Wait until the eg...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |

* Loss: [<code>ListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#listmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.ListMLELoss.ListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16,
      "respect_input_order": true
  }
  ```
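
For reference, this loss configuration corresponds to roughly the following instantiation. This is a minimal sketch using the class and parameter names recorded above (this card was produced with Sentence Transformers 3.5.0.dev0); in later releases the position-aware variant is exposed as `PListMLELoss` with `PListMLELambdaWeight`, so the import paths may differ.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import ListMLELoss
# Import path taken from the "lambda_weight" entry above; it may be named
# PListMLELambdaWeight in newer Sentence Transformers releases.
from sentence_transformers.cross_encoder.losses.ListMLELoss import ListMLELambdaWeight

model = CrossEncoder("microsoft/MiniLM-L12-H384-uncased", num_labels=1)
loss = ListMLELoss(
    model,
    lambda_weight=ListMLELambdaWeight(),  # position-aware weighting (Lan et al., 2013)
    mini_batch_size=16,                   # pairs scored per forward pass, bounds memory use
    respect_input_order=True,             # docs arrive already sorted by relevance label
)
```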


### Evaluation Dataset

#### ms_marco

* Dataset: [ms_marco](https://huggingface.co/datasets/microsoft/ms_marco) at [a47ee7a](https://huggingface.co/datasets/microsoft/ms_marco/tree/a47ee7aae8d7d466ba15f9f0bfac3b3681087b3a)
* Size: 1,000 evaluation samples
* Columns: <code>query</code>, <code>docs</code>, and <code>labels</code>
* Approximate statistics based on the first 1000 samples:
  |         | query                                                                                          | docs                                                                                   | labels                                                                                 |
  |:--------|:-----------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
  | type    | string                                                                                         | list                                                                                   | list                                                                                   |
  | details | <ul><li>min: 12 characters</li><li>mean: 33.41 characters</li><li>max: 94 characters</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> | <ul><li>min: 3 elements</li><li>mean: 6.50 elements</li><li>max: 10 elements</li></ul> |

* Samples:
  | query | docs | labels |
  |:------|:-----|:-------|
  | <code>who does glenn beck want for speaker of the house</code> | <code>["|. Nebraska Senator Ben Sasse spoke to Glenn Beck Wednesday about Sasse's idea to make Arthur Brooks, head of the American Enterprise Institute, the next Speaker of the House. There’s nothing in the Constitution that requires you to have a Speaker who is an elected member of Congress, Sasse said.", 'And the guy that kept coming to my mind as I was watching Sunday Night Football is Arthur Brooks, the head of AEI. And so I think that the House Republicans should think about going outside the box. There’s nothing in the Constitution that requires you to have a Speaker who is an elected member of Congress.', 'Share on Facebook Share on Twitter. Conservative radio host Glenn Beck went off on Republican leadership after the House passed a budget compromise Thursday, calling House Speaker John Boehner “worthless” and Senate Minority Leader Mitch McConnell a “liar.”.', '“I think John Boehner is one of the prime examples of worthless, worthless Republicans,” Beck said Thursday on Mark Levin’s...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>how long do you have to keep former employee records</code> | <code>['Employee Contracts. If you have employment contracts with your employees, then you should maintain these contracts for at least 10 years. According to Financial Web, you should “err on the side of caution” and maintain employee records for a longer period than you think you may need, in case a legal issue arises.', 'Small business expert Rieva Lesonsky suggests you keep employment records for a minimum of two years and for up to seven years. She says that most states have a two-year statute of limitations on lawsuit filings by former employees, so you want to make sure you have documents at hand if this occurs.', 'Since employees may come and go, you may wonder how long you should hang on to the employee records. The Internal Revenue Service (IRS) weighs in on records pertaining to employee taxes, such as payroll, but the other records depend on what types of records you have for employees.', 'Effective January 1, 2013, California law provides that current and former employees (or a ...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |
  | <code>what year was velcro invented</code> | <code>['Velcro was invented by George de Mestral a Swiss electrical engineer in 1941. This idea of inventing Velcro came to him when one day he returned after a walk from the hills and found cockleburs stuck to his clothes and his dog’s fur. George noticed its natural hook and loop quality and started making a fabric fastener on the same quality.', 'Velcro, which was invented by a Swiss Electrical Engineer George de Mestral, comprises two layers and when both these sides are hard-pressed together, they assist in fixing two surfaces. The thought to invent Velcro hits Mestral’s mind in the year 1941 after coming back from a hunting tour with his dog. ', 'In 1958, de Mestral filed for a patent application for his hook-and-loop fastener in Switzerland, which was granted in 1961. The term Velcro is a registered trademark of Velcro Industries B.V. Velcro Industries is a privately held worldwide corporation manufacturing consumer and industrial products. Among them is a series of mechanical-based f...</code> | <code>[1, 0, 0, 0, 0, ...]</code> |

* Loss: [<code>ListMLELoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#listmleloss) with these parameters:
  ```json
  {
      "lambda_weight": "sentence_transformers.cross_encoder.losses.ListMLELoss.ListMLELambdaWeight",
      "activation_fct": "torch.nn.modules.linear.Identity",
      "mini_batch_size": 16,
      "respect_input_order": true
  }
  ```


### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `load_best_model_at_end`: True

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs

| Epoch      | Step     | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10  | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:--------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1         | -1       | -             | -               | 0.0344 (-0.5060)         | 0.2073 (-0.1178)          | 0.0336 (-0.4671)     | 0.0918 (-0.3636)           |
| 0.0002     | 1        | 1412.3083     | -               | -                        | -                         | -                    | -                          |
| 0.0508     | 250      | 887.7485      | -               | -                        | -                         | -                    | -                          |
| 0.1016     | 500      | 853.8898      | 903.5635        | 0.2242 (-0.3163)         | 0.2467 (-0.0783)          | 0.3585 (-0.1421)     | 0.2765 (-0.1789)           |
| 0.1525     | 750      | 867.3723      | -               | -                        | -                         | -                    | -                          |
| 0.2033     | 1000     | 851.3223      | 880.1996        | 0.4790 (-0.0614)         | 0.3435 (+0.0184)          | 0.5945 (+0.0938)     | 0.4723 (+0.0170)           |
| 0.2541     | 1250     | 840.5654      | -               | -                        | -                         | -                    | -                          |
| 0.3049     | 1500     | 836.1076      | 872.8075        | 0.5189 (-0.0216)         | 0.3394 (+0.0143)          | 0.6097 (+0.1091)     | 0.4893 (+0.0339)           |
| 0.3558     | 1750     | 853.3524      | -               | -                        | -                         | -                    | -                          |
| 0.4066     | 2000     | 859.1896      | 872.7851        | 0.5453 (+0.0049)         | 0.3638 (+0.0387)          | 0.6322 (+0.1315)     | 0.5137 (+0.0584)           |
| 0.4574     | 2250     | 816.2849      | -               | -                        | -                         | -                    | -                          |
| 0.5082     | 2500     | 832.0728      | 866.5376        | 0.5428 (+0.0023)         | 0.3737 (+0.0487)          | 0.6384 (+0.1378)     | 0.5183 (+0.0629)           |
| 0.5591     | 2750     | 825.9285      | -               | -                        | -                         | -                    | -                          |
| 0.6099     | 3000     | 809.4326      | 865.0468        | 0.5319 (-0.0085)         | 0.3488 (+0.0238)          | 0.6320 (+0.1313)     | 0.5042 (+0.0489)           |
| 0.6607     | 3250     | 807.3669      | -               | -                        | -                         | -                    | -                          |
| 0.7115     | 3500     | 828.0153      | 869.0601        | 0.5479 (+0.0075)         | 0.3690 (+0.0440)          | 0.6495 (+0.1488)     | 0.5221 (+0.0668)           |
| 0.7624     | 3750     | 841.2574      | -               | -                        | -                         | -                    | -                          |
| 0.8132     | 4000     | 814.0583      | 865.1564        | 0.5406 (+0.0001)         | 0.3571 (+0.0320)          | 0.6519 (+0.1513)     | 0.5165 (+0.0612)           |
| 0.8640     | 4250     | 814.6952      | -               | -                        | -                         | -                    | -                          |
| **0.9148** | **4500** | **825.9762**  | **864.4775**    | **0.5506 (+0.0102)**     | **0.3756 (+0.0505)**      | **0.6570 (+0.1563)** | **0.5277 (+0.0724)**       |
| 0.9656     | 4750     | 821.2723      | -               | -                        | -                         | -                    | -                          |
| -1         | -1       | -             | -               | 0.5506 (+0.0102)         | 0.3756 (+0.0505)          | 0.6570 (+0.1563)     | 0.5277 (+0.0724)           |



* The bold row denotes the saved checkpoint.



### Environmental Impact

Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).

- **Energy Consumed**: 0.244 kWh
- **Carbon Emitted**: 0.095 kg of CO2
- **Hours Used**: 0.917 hours



### Training Hardware

- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB



### Framework Versions

- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0



## Citation



### BibTeX



#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```



#### ListMLELoss

```bibtex
@inproceedings{lan2013position,
    title={Position-aware ListMLE: a sequential learning process for ranking},
    author={Lan, Yanyan and Guo, Jiafeng and Cheng, Xueqi and Liu, Tie-Yan},
    booktitle={Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence},
    pages={333--342},
    year={2013}
}
```



<!--

## Glossary



*Clearly define terms in order to be accessible across audiences.*

-->



<!--

## Model Card Authors



*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*

-->



<!--

## Model Card Contact



*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*

-->