---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: How does the size of DeepSeek v3 compare to Meta’s Llama 3.1 405B
    model?
  sentences:
  - 'Terminology aside, I remain skeptical as to their utility based, once again,
    on the challenge of gullibility. LLMs believe anything you tell them. Any systems
    that attempts to make meaningful decisions on your behalf will run into the same
    roadblock: how good is a travel agent, or a digital assistant, or even a research
    tool if it can’t distinguish truth from fiction?

    Just the other day Google Search was caught serving up an entirely fake description
    of the non-existent movie “Encanto 2”. It turned out to be summarizing an imagined
    movie listing from a fan fiction wiki.'
  - 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
    models currently available, significantly bigger than the largest of Meta’s Llama
    series, Llama 3.1 405B.

    Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
    Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
    This is by far the highest ranking openly licensed model.

    The really impressive thing about DeepSeek v3 is the training cost. The model
    was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
    3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
    that benchmarks slightly worse.'
  - 'Against this photo of butterflies at the California Academy of Sciences:



    A shallow dish, likely a hummingbird or butterfly feeder, is red.  Pieces of orange
    slices of fruit are visible inside the dish.

    Two butterflies are positioned in the feeder, one is a dark brown/black butterfly
    with white/cream-colored markings.  The other is a large, brown butterfly with
    patterns of lighter brown, beige, and black markings, including prominent eye
    spots. The larger brown butterfly appears to be feeding on the fruit.'
- source_sentence: How does the author compare the difficulty of training an LLM to
    another complex task?
  sentences:
  - '“Agents” still haven’t really happened yet

    I find the term “agents” extremely frustrating. It lacks a single, clear and widely
    understood meaning... but the people who use the term never seem to acknowledge
    that.

    If you tell me that you are building “agents”, you’ve conveyed almost no information
    to me at all. Without reading your mind I have no way of telling which of the
    dozens of possible definitions you are talking about.'
  - 'So training an LLM still isn’t something a hobbyist can afford, but it’s no longer
    the sole domain of the super-rich. I like to compare the difficulty of training
    an LLM to that of building a suspension bridge—not trivial, but hundreds of countries
    around the world have figured out how to do it. (Correction: Wikipedia’s Suspension
    bridges by country category lists 44 countries).

    You can run LLMs on your own devices

    In January of this year, I thought it would be years before I could run a useful
    LLM on my own computer. GPT-3 and 3.5 were pretty much the only games in town,
    and I thought that even if the model weights were available it would take a $10,000+
    server to run them.'
  - 'This prompt-driven custom interface feature is so powerful and easy to build
    (once you’ve figured out the gnarly details of browser sandboxing) that I expect
    it to show up as a feature in a wide range of products in 2025.

    Universal access to the best models lasted for just a few short months

    For a few short months this year all three of the best available models—GPT-4o,
    Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.'
- source_sentence: What is the new approach to scaling models mentioned in the context?
  sentences:
  - 'So far, I think they’re a net positive. I’ve used them on a personal level to
    improve my productivity (and entertain myself) in all sorts of different ways.
    I think people who learn how to use them effectively can gain a significant boost
    to their quality of life.

    A lot of people are yet to be sold on their value! Some think their negatives
    outweigh their positives, some think they are all hot air, and some even think
    they represent an existential threat to humanity.

    They’re actually quite easy to build

    The most surprising thing we’ve learned about LLMs this year is that they’re actually
    quite easy to build.'
  - 'The biggest innovation here is that it opens up a new way to scale a model: instead
    of improving model performance purely through additional compute at training time,
    models can now take on harder problems by spending more compute on inference.

    The sequel to o1, o3 (they skipped “o2” for European trademark reasons) was announced
    on 20th December with an impressive result against the ARC-AGI benchmark, albeit
    one that likely involved more than $1,000,000 of compute time expense!

    o3 is expected to ship in January. I doubt many people have real-world problems
    that would benefit from that level of compute expenditure—I certainly don’t!—but
    it appears to be a genuine next step in LLM architecture for taking on much harder
    problems.'
  - 'Language Models are gullible. They “believe” what we tell them—what’s in their
    training data, then what’s in the fine-tuning data, then what’s in the prompt.

    In order to be useful tools for us, we need them to believe what we feed them!

    But it turns out a lot of the things we want to build need them not to be gullible.

    Everyone wants an AI personal assistant. If you hired a real-world personal assistant
    who believed everything that anyone told them, you would quickly find that their
    ability to positively impact your life was severely limited.'
- source_sentence: When was Anthropic’s Claude 3 series initially launched?
  sentences:
  - 'Prompt injection is a natural consequence of this gullibility. I’ve seen precious
    little progress on tackling that problem in 2024, and we’ve been talking about
    it since September 2022.

    I’m beginning to see the most popular idea of “agents” as dependent on AGI itself.
    A model that’s robust against gullibility is a very tall order indeed.

    Evals really matter

    Anthropic’s Amanda Askell (responsible for much of the work behind Claude’s Character):'
  - 'A year ago, the only organization that had released a generally useful LLM was
    OpenAI. We’ve now seen better-than-GPT-3 class models produced by Anthropic, Mistral,
    Google, Meta, EleutherAI, Stability AI, TII in Abu Dhabi (Falcon), Microsoft Research,
    xAI, Replit, Baidu and a bunch of other organizations.

    The training cost (hardware and electricity) is still significant—initially millions
    of dollars, but that seems to have dropped to the tens of thousands already. Microsoft’s
    Phi-2 claims to have used “14 days on 96 A100 GPUs”, which works out at around
    $35,000 using current Lambda pricing.'
  - 'Getting back to models that beat GPT-4: Anthropic’s Claude 3 series launched
    in March, and Claude 3 Opus quickly became my new favourite daily-driver. They
    upped the ante even more in June with the launch of Claude 3.5 Sonnet—a model
    that is still my favourite six months later (though it got a significant upgrade
    on October 22, confusingly keeping the same 3.5 version number. Anthropic fans
    have since taken to calling it Claude 3.6).'
- source_sentence: Why might fine-tuning an existing LLM be more accessible to hobbyists
    than training one from scratch?
  sentences:
  - 'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model)
    on my iPhone. You can install several different apps to get your own, local, completely
    private LLM. My own LLM project provides a CLI tool for running an array of different
    models via plugins.

    You can even run them entirely in your browser using WebAssembly and the latest
    Chrome!

    Hobbyists can build their own fine-tuned models

    I said earlier that building an LLM was still out of reach of hobbyists. That
    may be true for training from scratch, but fine-tuning one of those models is
    another matter entirely.'
  - 'Intuitively, one would expect that systems this powerful would take millions
    of lines of complex code. Instead, it turns out a few hundred lines of Python
    is genuinely enough to train a basic version!

    What matters most is the training data. You need a lot of data to make these
    things work, and the quantity and quality of the training data appears to be the
    most important factor in how good the resulting model is.

    If you can gather the right data, and afford to pay for the GPUs to train it,
    you can build an LLM.'
  - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t
    have their own inference-scaling models in the works. Meta published a relevant
    paper Training Large Language Models to Reason in a Continuous Latent Space in
    December.

    Was the best currently available LLM trained in China for less than $6m?

    Not quite, but almost! It does make for a great attention-grabbing headline.

    The big news to end the year was the release of DeepSeek v3—dropped on Hugging
    Face on Christmas Day without so much as a README file, then followed by documentation
    and a paper the day after that.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.9166666666666666
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.9166666666666666
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.9166666666666666
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9692441461309548
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9583333333333334
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9583333333333334
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-c53d04b6-ee03-4160-9525-a7af282c08e8")
# Run inference
sentences = [
    'Why might fine-tuning an existing LLM be more accessible to hobbyists than training one from scratch?',
    'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins.\nYou can even run them entirely in your browser using WebAssembly and the latest Chrome!\nHobbyists can build their own fine-tuned models\nI said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely.',
    'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t have their own inference-scaling models in the works. Meta published a relevant paper Training Large Language Models to Reason in a Continuous Latent Space in December.\nWas the best currently available LLM trained in China for less than $6m?\nNot quite, but almost! It does make for a great attention-grabbing headline.\nThe big news to end the year was the release of DeepSeek v3—dropped on Hugging Face on Christmas Day without so much as a README file, then followed by documentation and a paper the day after that.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
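
The same model can be used for simple semantic search. Below is a minimal sketch; the corpus and query are hypothetical and purely for illustration:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-c53d04b6-ee03-4160-9525-a7af282c08e8")

# Hypothetical corpus and query, for illustration only
corpus = [
    "DeepSeek v3 is a huge 685B parameter model trained for an estimated $5,576,000.",
    "Anthropic’s Claude 3 series launched in March, followed by Claude 3.5 Sonnet in June.",
]
query = "How much did it cost to train DeepSeek v3?"

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# Rank corpus passages by cosine similarity to the query
scores = model.similarity(query_embedding, corpus_embeddings)  # shape: [1, len(corpus)]
best = scores.argmax().item()
print(corpus[best])
```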

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.9167     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.9167     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.9167     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.9692** |
| cosine_mrr@10       | 0.9583     |
| cosine_map@100      | 0.9583     |
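
Scores like these can be reproduced with the evaluator linked above. A minimal sketch follows; the queries, corpus, and relevance judgments are hypothetical stand-ins, since the evaluation set behind this card is not published here:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-c53d04b6-ee03-4160-9525-a7af282c08e8")

# Hypothetical evaluation data, for illustration only
queries = {"q1": "When was Anthropic’s Claude 3 series initially launched?"}
corpus = {
    "d1": "Anthropic’s Claude 3 series launched in March.",
    "d2": "DeepSeek v3 was released on Hugging Face on Christmas Day.",
}
relevant_docs = {"q1": {"d1"}}  # maps each query id to the set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example-ir")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, ...
```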

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0                                                                         | sentence_1                                                                           |
  |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                               |
  | details | <ul><li>min: 12 tokens</li><li>mean: 20.94 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.14 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
  | sentence_0                                                                                   | sentence_1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
  |:---------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>When did Meta release the original Llama model?</code>                                 | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code>                                                                                                    |
  | <code>What was significant about the release of Llama 2 in July?</code>                      | <code>Then in February, Meta released Llama. And a few weeks later in March, Georgi Gerganov released code that got it working on a MacBook.<br>I wrote about how Large language models are having their Stable Diffusion moment, and with hindsight that was a very good call!<br>This unleashed a whirlwind of innovation, which was accelerated further in July when Meta released Llama 2—an improved version which, crucially, included permission for commercial use.<br>Today there are literally thousands of LLMs that can be run locally, on all manner of different devices.</code>                                                                                                    |
  | <code>What are some companies mentioned that have developed multi-modal audio models?</code> | <code>Your browser does not support the audio element.<br><br>OpenAI aren’t the only group with a multi-modal audio model. Google’s Gemini also accepts audio input, and the Google Gemini apps can speak in a similar way to ChatGPT now. Amazon also pre-announced voice mode for Amazon Nova, but that’s meant to roll out in Q1 of 2025.<br>Google’s NotebookLM, released in September, took audio output to a new level by producing spookily realistic conversations between two “podcast hosts” about anything you fed into their tool. They later added custom instructions, so naturally I turned them into pelicans:<br><br><br>Your browser does not support the audio element.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
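
Because training used a Matryoshka objective, embeddings from this model can be truncated to any of the dimensions listed above with only a modest quality drop. A minimal sketch using the library’s `truncate_dim` option (256 is one of the trained Matryoshka dimensions; the full output dimensionality is 1024):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encoded embeddings are truncated to 256 dimensions
model = SentenceTransformer(
    "dwb2023/legal-ft-c53d04b6-ee03-4160-9525-a7af282c08e8",
    truncate_dim=256,
)

embeddings = model.encode(["How cheap was DeepSeek v3 to train?"])
print(embeddings.shape)
# (1, 256)
```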

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
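
The run described above can be approximated with the `SentenceTransformerTrainer` API. The following is a sketch under stated assumptions, not the exact training script: the dataset below is a tiny hypothetical stand-in for the unnamed 156-pair dataset, and the output path is invented.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Hypothetical stand-in for the unnamed (question, passage) training pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["When did Meta release the original Llama model?"],
    "sentence_1": ["Then in February, Meta released Llama."],
})

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, mirroring the parameters above
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="arctic-embed-l-ft",  # hypothetical output path
    num_train_epochs=10,
    per_device_train_batch_size=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```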

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9638         |
| 2.0   | 32   | 0.9638         |
| 3.0   | 48   | 0.9692         |
| 3.125 | 50   | 0.9692         |
| 4.0   | 64   | 0.9692         |
| 5.0   | 80   | 0.9539         |
| 6.0   | 96   | 0.9539         |
| 6.25  | 100  | 0.9539         |
| 7.0   | 112  | 0.9539         |
| 8.0   | 128  | 0.9539         |
| 9.0   | 144  | 0.9692         |
| 9.375 | 150  | 0.9692         |
| 10.0  | 160  | 0.9692         |


### Framework Versions
- Python: 3.11.12
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->