dwb2023 committed
Commit c6203d4 · verified · 1 Parent(s): 1d6eab5

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
README.md ADDED
@@ -0,0 +1,755 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:157
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Why does the author recommend reading the first few pages of the 69-page PDF document related to the lawsuit?
  sentences:
  - 'We don’t yet know how to build GPT-4

    Frustratingly, despite the enormous leaps ahead we’ve had this year, we are yet to see an alternative model that’s better than GPT-4.

    OpenAI released GPT-4 in March, though it later turned out we had a sneak peak of it in February when Microsoft used it as part of the new Bing.

    This may well change in the next few weeks: Google’s Gemini Ultra has big claims, but isn’t yet available for us to try out.

    The team behind Mistral are working to beat GPT-4 as well, and their track record is already extremely strong considering their first public model only came out in September, and they’ve released two significant improvements since then.'
  - 'Just this week, the New York Times launched a landmark lawsuit against OpenAI and Microsoft over this issue. The 69 page PDF is genuinely worth reading—especially the first few pages, which lay out the issues in a way that’s surprisingly easy to follow. The rest of the document includes some of the clearest explanations of what LLMs are, how they work and how they are built that I’ve read anywhere.

    The legal arguments here are complex. I’m not a lawyer, but I don’t think this one will be easily decided. Whichever way it goes, I expect this case to have a profound impact on how this technology develops in the future.'
  - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t have their own inference-scaling models in the works. Meta published a relevant paper Training Large Language Models to Reason in a Continuous Latent Space in December.

    Was the best currently available LLM trained in China for less than $6m?

    Not quite, but almost! It does make for a great attention-grabbing headline.

    The big news to end the year was the release of DeepSeek v3—dropped on Hugging Face on Christmas Day without so much as a README file, then followed by documentation and a paper the day after that.'
- source_sentence: Why does the author find the term “agents” frustrating?
  sentences:
  - 'Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks about Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!


    I can now run a GPT-4 class model on my laptop talks about running Meta’s Llama 3.3 70B (released in December)'
  - '“Agents” still haven’t really happened yet

    I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning... but the people who use the term never seem to acknowledge that.

    If you tell me that you are building “agents”, you’ve conveyed almost no information to me at all. Without reading your mind I have no way of telling which of the dozens of possible definitions you are talking about.'
  - 'Terminology aside, I remain skeptical as to their utility based, once again, on the challenge of gullibility. LLMs believe anything you tell them. Any systems that attempts to make meaningful decisions on your behalf will run into the same roadblock: how good is a travel agent, or a digital assistant, or even a research tool if it can’t distinguish truth from fiction?

    Just the other day Google Search was caught serving up an entirely fake description of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined movie listing from a fan fiction wiki.'
- source_sentence: Which company released the QwQ model under an Apache 20 license?
  sentences:
  - 'Embeddings: What they are and why they matter

    61.7k

    79.3k



    Catching up on the weird world of LLMs

    61.6k

    85.9k



    llamafile is the new best way to run an LLM on your own computer

    52k

    66k



    Prompt injection explained, with video, slides, and a transcript

    51k

    61.9k



    AI-enhanced development makes me more ambitious with my projects

    49.6k

    60.1k



    Understanding GPT tokenizers

    49.5k

    61.1k



    Exploring GPTs: ChatGPT in a trench coat?

    46.4k

    58.5k



    Could you train a ChatGPT-beating model for $85,000 and run it in a browser?

    40.5k

    49.2k



    How to implement Q&A against your documentation with GPT3, embeddings and Datasette

    37.3k

    44.9k



    Lawyer cites fake cases invented by ChatGPT, judge is not amused

    37.1k

    47.4k'
  - 'OpenAI are not the only game in town here. Google released their first entrant in the category, gemini-2.0-flash-thinking-exp, on December 19th.

    Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache 2.0 license, and that one I could run on my own machine. They followed that up with a vision reasoning model called QvQ on December 24th, which I also ran locally.

    DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through their chat interface on November 20th.

    To understand more about inference scaling I recommend Is AI progress slowing down? by Arvind Narayanan and Sayash Kapoor.'
  - 'Against this photo of butterflies at the California Academy of Sciences:



    A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish.

    Two butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings. The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.'
- source_sentence: How does the 2024 review of Large Language Models build upon the insights from the 2023 review?
  sentences:
  - 'Law is not ethics. Is it OK to train models on people’s content without their permission, when those models will then be used in ways that compete with those people?

    As the quality of results produced by AI models has increased over the year, these questions have become even more pressing.

    The impact on human society in terms of these models is already huge, if difficult to objectively measure.

    People have certainly lost work to them—anecdotally, I’ve seen this for copywriters, artists and translators.

    There are a great deal of untold stories here. I’m hoping 2024 sees significant amounts of dedicated journalism on this topic.

    My blog in 2023

    Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django SQL Dashboard):'
  - 'The GPT-4 barrier was comprehensively broken

    In my December 2023 review I wrote about how We don’t yet know how to build GPT-4—OpenAI’s best model was almost a year old at that point, yet no other AI lab had produced anything better. What did OpenAI know that the rest of us didn’t?

    I’m relieved that this has changed completely in the past twelve months. 18 organizations now have models on the Chatbot Arena Leaderboard that rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the board)—70 models in total.'
  - 'Things we learned about LLMs in 2024






















    Simon Willison’s Weblog

    Subscribe







    Things we learned about LLMs in 2024

    31st December 2024

    A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.

    This is a sequel to my review of 2023.

    In this article:'
- source_sentence: What is the challenge in building AI personal assistants based on the gullibility of language models?
  sentences:
  - 'Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.

    In order to be useful tools for us, we need them to believe what we feed them!

    But it turns out a lot of the things we want to build need them not to be gullible.

    Everyone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.'
  - 'Large Language Models

    They’re actually quite easy to build

    You can run LLMs on your own devices

    Hobbyists can build their own fine-tuned models

    We don’t yet know how to build GPT-4

    Vibes Based Development

    LLMs are really smart, and also really, really dumb

    Gullibility is the biggest unsolved problem

    Code may be the best application

    The ethics of this space remain diabolically complex

    My blog in 2023'
  - 'These price drops are driven by two factors: increased competition and increased efficiency. The efficiency thing is really important for everyone who is concerned about the environmental impact of LLMs. These price drops tie directly to how much energy is being used for running prompts.

    There’s still plenty to worry about with respect to the environmental impact of the great AI datacenter buildout, but a lot of the concerns over the energy cost of individual prompts are no longer credible.

    Here’s a fun napkin calculation: how much would it cost to generate short descriptions of every one of the 68,000 photos in my personal photo library using Google’s Gemini 1.5 Flash 8B (released in October), their cheapest model?'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.9583333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.9583333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.9583333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9846220730654774
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.9791666666666666
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.9791666666666666
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
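
The same three-module stack (Transformer encoder, CLS-token pooling, L2 normalization) can also be assembled by hand with the `sentence_transformers.models` API. This is a minimal sketch for illustration only; loading the published checkpoint directly, as shown in the Usage section below, is the normal route. The module arguments simply mirror `modules.json`, `sentence_bert_config.json` and `1_Pooling/config.json` in this repository.

```python
from sentence_transformers import SentenceTransformer, models

# Sketch: rebuild the Transformer -> CLS Pooling -> Normalize stack described above.
word_embedding_model = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 1024, matching 1_Pooling/config.json
    pooling_mode="cls",
)
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize])
```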

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")
# Run inference
sentences = [
    'What is the challenge in building AI personal assistants based on the gullibility of language models?',
    'Language Models are gullible. They “believe” what we tell them—what’s in their training data, then what’s in the fine-tuning data, then what’s in the prompt.\nIn order to be useful tools for us, we need them to believe what we feed them!\nBut it turns out a lot of the things we want to build need them not to be gullible.\nEveryone wants an AI personal assistant. If you hired a real-world personal assistant who believed everything that anyone told them, you would quickly find that their ability to positively impact your life was severely limited.',
    'These price drops are driven by two factors: increased competition and increased efficiency. The efficiency thing is really important for everyone who is concerned about the environmental impact of LLMs. These price drops tie directly to how much energy is being used for running prompts.\nThere’s still plenty to worry about with respect to the environmental impact of the great AI datacenter buildout, but a lot of the concerns over the energy cost of individual prompts are no longer credible.\nHere’s a fun napkin calculation: how much would it cost to generate short descriptions of every one of the 68,000 photos in my personal photo library using Google’s Gemini 1.5 Flash 8B (released in October), their cheapest model?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
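
For retrieval-style use, this repository's `config_sentence_transformers.json` defines a `query` prompt ("Represent this sentence for searching relevant passages: "), which Sentence Transformers applies when `prompt_name` is passed to `encode`. The snippet below is a minimal sketch of that pattern; the query and passage strings are illustrative only, not part of the training set.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")

# Illustrative query and candidate passages.
queries = ["Why does the author find the term “agents” frustrating?"]
passages = [
    "I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning.",
    "These price drops are driven by two factors: increased competition and increased efficiency.",
]

# The configured "query" prompt is prepended to queries; passages are encoded as-is.
query_embeddings = model.encode(queries, prompt_name="query")
passage_embeddings = model.encode(passages)

# Cosine similarity; embeddings are normalized, so this equals the dot product.
scores = model.similarity(query_embeddings, passage_embeddings)  # shape: [1, 2]
print(scores.argmax(dim=1))  # index of the best-matching passage for each query
```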

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.9583     |
| cosine_accuracy@3   | 1.0        |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.9583     |
| cosine_precision@3  | 0.3333     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.9583     |
| cosine_recall@3     | 1.0        |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.9846** |
| cosine_mrr@10       | 0.9792     |
| cosine_map@100      | 0.9792     |

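The held-out evaluation split behind these numbers is not published with this card, but the same evaluator can be run on any query/corpus pair. A minimal sketch, with toy data standing in for the real evaluation set:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("dwb2023/legal-ft-794455c7-1bee-466a-8110-133f086ed907")

# Toy example: query IDs -> text, corpus IDs -> text, query IDs -> relevant corpus IDs.
queries = {"q1": "Which company released the QwQ model under an Apache 2.0 license?"}
corpus = {
    "d1": "Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache 2.0 license.",
    "d2": "The big news to end the year was the release of DeepSeek v3.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="toy-eval")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100, ...
```
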
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 157 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 157 samples:
  |         | sentence_0                                                                         | sentence_1                                                                            |
  |:--------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------|
  | type    | string                                                                             | string                                                                                |
  | details | <ul><li>min: 2 tokens</li><li>mean: 20.94 tokens</li><li>max: 37 tokens</li></ul>  | <ul><li>min: 43 tokens</li><li>mean: 135.72 tokens</li><li>max: 214 tokens</li></ul>  |
* Samples:
  | sentence_0                                                                           | sentence_1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
  |:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>What was the typical context length accepted by most models last year?</code>  | <code>Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.</code>                                                                                                                                                     |
  | <code>How many tokens can Google’s Gemini series accept in 2024?</code>               | <code>Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable exception of Claude 2.1 which accepted 200,000. Today every serious provider has a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.</code>                                                                                                                                                     |
  | <code>What are the new capabilities introduced by Google’s Gemini 15 Pro?</code>      | <code>The earliest of those was Google’s Gemini 1.5 Pro, released in February. In addition to producing GPT-4 level outputs, it introduced several brand new capabilities to the field—most notably its 1 million (and then later 2 million) token input context length, and the ability to input video.<br>I wrote about this at the time in The killer app of Gemini Pro 1.5 is video, which earned me a short appearance as a talking head in the Google I/O opening keynote in May.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
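
This configuration corresponds to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`, so the in-batch ranking objective is also applied to embeddings truncated to each listed dimension, all weighted equally. A minimal sketch of that construction (variable names are illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Ranking loss over in-batch negatives, additionally evaluated on embeddings
# truncated to 768, 512, 256, 128 and 64 dimensions (the dims listed above).
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```

At inference time the same property can be exploited by loading the model with a smaller `truncate_dim` (for example `SentenceTransformer(model_id, truncate_dim=256)`), trading some accuracy for smaller vectors.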

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

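These settings map onto the standard `SentenceTransformerTrainer` API. The sketch below shows one way such a fine-tune could be set up under that assumption; the two training pairs, the output directory, and the omission of the evaluation setup are illustrative simplifications, not the original training script.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

# Tiny stand-in for the 157 (question, passage) pairs used for training.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "How many tokens can Google’s Gemini series accept in 2024?",
        "Why does the author find the term “agents” frustrating?",
    ],
    "sentence_1": [
        "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths.",
        "I find the term “agents” extremely frustrating. It lacks a single, clear and widely understood meaning.",
    ],
})

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft",  # illustrative path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    # eval_strategy="steps" was used together with an evaluator; omitted here for brevity.
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
model.save_pretrained("legal-ft/final")
```
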
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
671
+ ### Training Logs
672
+ | Epoch | Step | cosine_ndcg@10 |
673
+ |:-----:|:----:|:--------------:|
674
+ | 1.0 | 16 | 0.9638 |
675
+ | 2.0 | 32 | 0.9484 |
676
+ | 3.0 | 48 | 0.9484 |
677
+ | 3.125 | 50 | 0.9484 |
678
+ | 4.0 | 64 | 0.9539 |
679
+ | 5.0 | 80 | 0.9692 |
680
+ | 6.0 | 96 | 0.9692 |
681
+ | 6.25 | 100 | 0.9692 |
682
+ | 7.0 | 112 | 0.9692 |
683
+ | 8.0 | 128 | 0.9846 |
684
+ | 9.0 | 144 | 0.9846 |
685
+ | 9.375 | 150 | 0.9846 |
686
+ | 10.0 | 160 | 0.9846 |
687
+
688
+
689
+ ### Framework Versions
690
+ - Python: 3.11.12
691
+ - Sentence Transformers: 4.1.0
692
+ - Transformers: 4.51.3
693
+ - PyTorch: 2.6.0+cu124
694
+ - Accelerate: 1.6.0
695
+ - Datasets: 3.6.0
696
+ - Tokenizers: 0.21.1
697
+
698
+ ## Citation
699
+
700
+ ### BibTeX
701
+
702
+ #### Sentence Transformers
703
+ ```bibtex
704
+ @inproceedings{reimers-2019-sentence-bert,
705
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
706
+ author = "Reimers, Nils and Gurevych, Iryna",
707
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
708
+ month = "11",
709
+ year = "2019",
710
+ publisher = "Association for Computational Linguistics",
711
+ url = "https://arxiv.org/abs/1908.10084",
712
+ }
713
+ ```
714
+
715
+ #### MatryoshkaLoss
716
+ ```bibtex
717
+ @misc{kusupati2024matryoshka,
718
+ title={Matryoshka Representation Learning},
719
+ author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
720
+ year={2024},
721
+ eprint={2205.13147},
722
+ archivePrefix={arXiv},
723
+ primaryClass={cs.LG}
724
+ }
725
+ ```
726
+
727
+ #### MultipleNegativesRankingLoss
728
+ ```bibtex
729
+ @misc{henderson2017efficient,
730
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
731
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
732
+ year={2017},
733
+ eprint={1705.00652},
734
+ archivePrefix={arXiv},
735
+ primaryClass={cs.CL}
736
+ }
737
+ ```
738
+
739
+ <!--
740
+ ## Glossary
741
+
742
+ *Clearly define terms in order to be accessible across audiences.*
743
+ -->
744
+
745
+ <!--
746
+ ## Model Card Authors
747
+
748
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
749
+ -->
750
+
751
+ <!--
752
+ ## Model Card Contact
753
+
754
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
755
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
{
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.51.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
{
  "__version__": {
    "sentence_transformers": "4.1.0",
    "transformers": "4.51.3",
    "pytorch": "2.6.0+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c4d1fdbb662a11cd1299a502098db86bbc7c9d479a72fe3ae08bd77b87ac3b91
size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff