drewgenai committed
Commit 82e86d0 · verified · 1 Parent(s): 39b26f1

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
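With `pooling_mode_cls_token` as the only active mode, the sentence embedding is simply the final hidden state of the `[CLS]` token. A minimal PyTorch sketch of what this pooling step does (illustrative only; the library's `Pooling` module implements it internally):

```python
import torch

def cls_pooling(token_embeddings: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 1024) from the BERT backbone.
    # CLS pooling keeps only the first token's hidden state.
    return token_embeddings[:, 0]

hidden = torch.randn(3, 512, 1024)  # dummy backbone output
print(cls_pooling(hidden).shape)    # torch.Size([3, 1024])
```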
README.md ADDED
@@ -0,0 +1,716 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:156
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: How do longer inputs enhance the problem-solving capabilities of
+     an LLM?
+   sentences:
+   - 'This remains astonishing to me. I thought a model with the capabilities and output
+     quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.
+
+     These models take up enough of my 64GB of RAM that I don’t run them often—they
+     don’t leave much room for anything else.
+
+     The fact that they run at all is a testament to the incredible training and inference
+     performance gains that we’ve figured out over the past year. It turns out there
+     was a lot of low-hanging fruit to be harvested in terms of model efficiency. I
+     expect there’s still more to come.'
+   - 'Longer inputs dramatically increase the scope of problems that can be solved
+     with an LLM: you can now throw in an entire book and ask questions about its contents,
+     but more importantly you can feed in a lot of example code to help the model correctly
+     solve a coding problem. LLM use-cases that involve long inputs are far more interesting
+     to me than short prompts that rely purely on the information already baked into
+     the model weights. Many of my tools were built using this pattern.'
+   - 'Nothing yet from Anthropic or Meta but I would be very surprised if they don’t
+     have their own inference-scaling models in the works. Meta published a relevant
+     paper Training Large Language Models to Reason in a Continuous Latent Space in
+     December.
+
+     Was the best currently available LLM trained in China for less than $6m?
+
+     Not quite, but almost! It does make for a great attention-grabbing headline.
+
+     The big news to end the year was the release of DeepSeek v3—dropped on Hugging
+     Face on Christmas Day without so much as a README file, then followed by documentation
+     and a paper the day after that.'
+ - source_sentence: What issue does the author highlight regarding the communication
+     of information when someone claims to be building "agents"?
+   sentences:
+   - 'Things we learned about LLMs in 2024
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+     Simon Willison’s Weblog
+
+     Subscribe
+
+
+
+
+
+
+
+     Things we learned about LLMs in 2024
+
+     31st December 2024
+
+     A lot has happened in the world of Large Language Models over the course of 2024.
+     Here’s a review of things we figured out about the field in the past twelve months,
+     plus my attempt at identifying key themes and pivotal moments.
+
+     This is a sequel to my review of 2023.
+
+     In this article:'
+   - '“Agents” still haven’t really happened yet
+
+     I find the term “agents” extremely frustrating. It lacks a single, clear and widely
+     understood meaning... but the people who use the term never seem to acknowledge
+     that.
+
+     If you tell me that you are building “agents”, you’ve conveyed almost no information
+     to me at all. Without reading your mind I have no way of telling which of the
+     dozens of possible definitions you are talking about.'
+   - 'Prince Canuma’s excellent, fast moving mlx-vlm project brings vision LLMs to
+     Apple Silicon as well. I used that recently to run Qwen’s QvQ.
+
+     While MLX is a game changer, Apple’s own “Apple Intelligence” features have mostly
+     been a disappointment. I wrote about their initial announcement in June, and I
+     was optimistic that Apple had focused hard on the subset of LLM applications that
+     preserve user privacy and minimize the chance of users getting mislead by confusing
+     features.'
+ - source_sentence: How does the author feel about their choice of platform this year
+     compared to last year?
+   sentences:
+   - 'On the one hand, we keep on finding new things that LLMs can do that we didn’t
+     expect—and that the people who trained the models didn’t expect either. That’s
+     usually really fun!
+
+     But on the other hand, the things you sometimes have to do to get the models to
+     behave are often incredibly dumb.
+
+     Does ChatGPT get lazy in December, because its hidden system prompt includes the
+     current date and its training data shows that people provide less useful answers
+     coming up to the holidays?
+
+     The honest answer is “maybe”! No-one is entirely sure, but if you give it a different
+     date its answers may skew slightly longer.'
+   - 'I’m still trying to figure out the best patterns for doing this for my own work.
+     Everyone knows that evals are important, but there remains a lack of great guidance
+     for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
+     riding a bicycle benchmark is a pale imitation of what a real eval suite should
+     look like.
+
+     Apple Intelligence is bad, Apple’s MLX library is excellent
+
+     As a Mac user I’ve been feeling a lot better about my choice of platform this
+     year.
+
+     Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
+     was a huge disadvantage in terms of trying out new models.'
+   - 'One way to think about these models is an extension of the chain-of-thought prompting
+     trick, first explored in the May 2022 paper Large Language Models are Zero-Shot
+     Reasoners.
+
+     This is that trick where, if you get a model to talk out loud about a problem
+     it’s solving, you often get a result which the model would not have achieved otherwise.
+
+     o1 takes this process and further bakes it into the model itself. The details
+     are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the
+     problem that are not directly visible to the user (though the ChatGPT UI shows
+     a summary of them), then outputs a final result.'
+ - source_sentence: What are the implications of having a Code Interpreter equivalent
+     for fact-checking natural language?
+   sentences:
+   - 'I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model)
+     on my iPhone. You can install several different apps to get your own, local, completely
+     private LLM. My own LLM project provides a CLI tool for running an array of different
+     models via plugins.
+
+     You can even run them entirely in your browser using WebAssembly and the latest
+     Chrome!
+
+     Hobbyists can build their own fine-tuned models
+
+     I said earlier that building an LLM was still out of reach of hobbyists. That
+     may be true for training from scratch, but fine-tuning one of those models is
+     another matter entirely.'
+   - 'Now add a walrus: Prompt engineering in DALL-E 3
+
+     32.8k
+
+     41.2k
+
+
+
+     Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and
+     it’s very impressive
+
+     32.5k
+
+     38.2k
+
+
+
+     ChatGPT can’t access the internet, even though it really looks like it can
+
+     30.5k
+
+     34.2k
+
+
+
+     Stanford Alpaca, and the acceleration of on-device large language model development
+
+     29.7k
+
+     35.7k
+
+
+
+     Run Llama 2 on your own Mac using LLM and Homebrew
+
+     27.9k
+
+     33.6k
+
+
+
+     Midjourney 5.1
+
+     26.7k
+
+     33.4k
+
+
+
+     Think of language models like ChatGPT as a “calculator for words”
+
+     25k
+
+     31.8k
+
+
+
+     Multi-modal prompt injection image attacks against GPT-4V
+
+     23.7k
+
+     27.4k'
+   - 'Except... you can run generated code to see if it’s correct. And with patterns
+     like ChatGPT Code Interpreter the LLM can execute the code itself, process the
+     error message, then rewrite it and keep trying until it works!
+
+     So hallucination is a much lesser problem for code generation than for anything
+     else. If only we had the equivalent of Code Interpreter for fact-checking natural
+     language!
+
+     How should we feel about this as software engineers?
+
+     On the one hand, this feels like a threat: who needs a programmer if ChatGPT can
+     write code for you?'
+ - source_sentence: How does the author compare a prompt without evals, models, and
+     UX to an ASML machine?
+   sentences:
+   - 'When @v0 first came out we were paranoid about protecting the prompt with all
+     kinds of pre and post processing complexity.
+
+     We completely pivoted to let it rip. A prompt without the evals, models, and especially
+     UX is like getting a broken ASML machine without a manual'
+   - 'Qwen2.5-Coder-32B is an LLM that can code well that runs on my Mac talks about
+     Qwen2.5-Coder-32B in November—an Apache 2.0 licensed model!
+
+
+     I can now run a GPT-4 class model on my laptop talks about running Meta’s Llama
+     3.3 70B (released in December)'
+   - 'On the other hand, as software engineers we are better placed to take advantage
+     of this than anyone else. We’ve all been given weird coding interns—we can use
+     our deep knowledge to prompt them to solve coding problems more effectively than
+     anyone else can.
+
+     The ethics of this space remain diabolically complex
+
+     In September last year Andy Baio and I produced the first major story on the unlicensed
+     training data behind Stable Diffusion.
+
+     Since then, almost every major LLM (and most of the image generation models) have
+     also been trained on unlicensed data.'
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.9166666666666666
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 1.0
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 1.0
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 1.0
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.9166666666666666
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.3333333333333333
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.20000000000000004
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.10000000000000002
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.9166666666666666
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 1.0
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 1.0
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 1.0
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.9692441461309548
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.9583333333333334
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.9583333333333334
+       name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
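+ For illustration, the same three-module pipeline can be assembled by hand with `sentence_transformers.models` (a sketch only; the saved checkpoint already bundles these modules, so normally you just load it by name):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ # Sketch: rebuild Transformer -> CLS Pooling -> Normalize from the base model.
+ transformer = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
+ pooling = models.Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
+ normalize = models.Normalize()
+ model = SentenceTransformer(modules=[transformer, pooling, normalize])
+ ```
+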
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("drewgenai/legal-ft-2")
+ # Run inference
+ sentences = [
+     'How does the author compare a prompt without evals, models, and UX to an ASML machine?',
+     'When @v0 first came out we were paranoid about protecting the prompt with all kinds of pre and post processing complexity.\nWe completely pivoted to let it rip. A prompt without the evals, models, and especially UX is like getting a broken ASML machine without a manual',
+     'On the other hand, as software engineers we are better placed to take advantage of this than anyone else. We’ve all been given weird coding interns—we can use our deep knowledge to prompt them to solve coding problems more effectively than anyone else can.\nThe ethics of this space remain diabolically complex\nIn September last year Andy Baio and I produced the first major story on the unlicensed training data behind Stable Diffusion.\nSince then, almost every major LLM (and most of the image generation models) have also been trained on unlicensed data.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
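+
+ The configuration shipped with this model (see `config_sentence_transformers.json` below) stores a `query` prompt, so for retrieval you should encode queries with `prompt_name="query"` and passages as-is. A brief sketch continuing from the snippet above (the query string is illustrative):
+
+ ```python
+ # Encode a query with the stored "query" prompt prefix; passages stay unprefixed.
+ query_embeddings = model.encode(
+     ["How do longer inputs enhance the problem-solving capabilities of an LLM?"],
+     prompt_name="query",
+ )
+ scores = model.similarity(query_embeddings, embeddings)
+ print(scores)  # higher score = more relevant passage
+ ```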
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.9167     |
+ | cosine_accuracy@3   | 1.0        |
+ | cosine_accuracy@5   | 1.0        |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.9167     |
+ | cosine_precision@3  | 0.3333     |
+ | cosine_precision@5  | 0.2        |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.9167     |
+ | cosine_recall@3     | 1.0        |
+ | cosine_recall@5     | 1.0        |
+ | cosine_recall@10    | 1.0        |
+ | **cosine_ndcg@10**  | **0.9692** |
+ | cosine_mrr@10       | 0.9583     |
+ | cosine_map@100      | 0.9583     |
+
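+ To reproduce an evaluation like this on your own query/passage pairs, the same evaluator class can be used directly (a sketch; the tiny dictionaries here are placeholders for real data):
+
+ ```python
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+
+ queries = {"q1": "How do longer inputs enhance the problem-solving capabilities of an LLM?"}
+ corpus = {"d1": "Longer inputs dramatically increase the scope of problems that can be solved with an LLM ..."}
+ relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant to each query
+
+ evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
+ results = evaluator(model)
+ print(results["cosine_ndcg@10"])
+ ```
+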
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 156 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 156 samples:
+   |         | sentence_0                                                                         | sentence_1                                                                           |
+   |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
+   | type    | string                                                                               | string                                                                                 |
+   | details | <ul><li>min: 11 tokens</li><li>mean: 20.29 tokens</li><li>max: 37 tokens</li></ul>  | <ul><li>min: 43 tokens</li><li>mean: 134.95 tokens</li><li>max: 214 tokens</li></ul>  |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>What are some examples of programming languages mentioned in the context?</code> | <code>If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English.<br>It’s still astonishing to me how effective they are though.<br>One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code should be useless.</code> |
+   | <code>What is one of the major weaknesses of LLMs as described in the context?</code> | <code>If you think about what they do, this isn’t such a big surprise. The grammar rules of programming languages like Python and JavaScript are massively less complicated than the grammar of Chinese, Spanish or English.<br>It’s still astonishing to me how effective they are though.<br>One of the great weaknesses of LLMs is their tendency to hallucinate—to imagine things that don’t correspond to reality. You would expect this to be a particularly bad problem for code—if an LLM hallucinates a method that doesn’t exist, the code should be useless.</code> |
+   | <code>What is the significance of prompt engineering in DALL-E 3?</code> | <code>Now add a walrus: Prompt engineering in DALL-E 3<br>32.8k<br>41.2k<br><br><br>Web LLM runs the vicuna-7b Large Language Model entirely in your browser, and it’s very impressive<br>32.5k<br>38.2k<br><br><br>ChatGPT can’t access the internet, even though it really looks like it can<br>30.5k<br>34.2k<br><br><br>Stanford Alpaca, and the acceleration of on-device large language model development<br>29.7k<br>35.7k<br><br><br>Run Llama 2 on your own Mac using LLM and Homebrew<br>27.9k<br>33.6k<br><br><br>Midjourney 5.1<br>26.7k<br>33.4k<br><br><br>Think of language models like ChatGPT as a “calculator for words”<br>25k<br>31.8k<br><br><br>Multi-modal prompt injection image attacks against GPT-4V<br>23.7k<br>27.4k</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
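+
+ Because training used MatryoshkaLoss, prefixes of each embedding are themselves trained to work as smaller embeddings. A sketch of using a truncated dimension (256 here, one of the configured `matryoshka_dims`; remember to re-normalize before cosine similarity):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ # Truncate the 1024-d embeddings to their first 256 dims and re-normalize.
+ truncated = F.normalize(torch.from_numpy(embeddings)[:, :256], p=2, dim=1)
+
+ # Alternatively, let the library truncate at load time:
+ # model = SentenceTransformer("drewgenai/legal-ft-2", truncate_dim=256)
+ ```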
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 8
+ - `per_device_eval_batch_size`: 8
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
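+
+ The card does not include the training script, but given the losses and hyperparameters above, the fine-tuning run likely looked roughly like this (a sketch under those assumptions; the two-row dataset stands in for the 156 (sentence_0, sentence_1) pairs):
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
+ train_dataset = Dataset.from_dict({  # placeholder for the real question/passage pairs
+     "sentence_0": ["What is one of the major weaknesses of LLMs?"],
+     "sentence_1": ["One of the great weaknesses of LLMs is their tendency to hallucinate."],
+ })
+
+ # MultipleNegativesRankingLoss wrapped so the first 768/512/256/128/64 dims also train well.
+ loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
+                       matryoshka_dims=[768, 512, 256, 128, 64])
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="legal-ft-2",
+     num_train_epochs=10,
+     per_device_train_batch_size=8,
+ )
+ trainer = SentenceTransformerTrainer(model=model, args=args,
+                                      train_dataset=train_dataset, loss=loss)
+ trainer.train()
+ ```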
+
+ ### Training Logs
+ | Epoch | Step | cosine_ndcg@10 |
+ |:-----:|:----:|:--------------:|
+ | 1.0   | 20   | 0.9638         |
+ | 2.0   | 40   | 0.9539         |
+ | 2.5   | 50   | 0.9539         |
+ | 3.0   | 60   | 0.9539         |
+ | 4.0   | 80   | 0.9692         |
+ | 5.0   | 100  | 0.9692         |
+ | 6.0   | 120  | 0.9692         |
+ | 7.0   | 140  | 0.9692         |
+ | 7.5   | 150  | 0.9692         |
+ | 8.0   | 160  | 0.9692         |
+ | 9.0   | 180  | 0.9692         |
+ | 10.0  | 200  | 0.9692         |
+
+
+ ### Framework Versions
+ - Python: 3.13.1
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.3
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
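This is a standard BERT-large-shaped backbone (24 layers, 16 heads, hidden size 1024), so it can also be loaded directly with 🤗 Transformers. A sketch of matching the SentenceTransformer output by hand, applying the CLS pooling and L2 normalization described by the other configs in this commit:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("drewgenai/legal-ft-2")
backbone = AutoModel.from_pretrained("drewgenai/legal-ft-2")

batch = tokenizer(["An example sentence."], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = backbone(**batch).last_hidden_state   # (batch, seq_len, 1024)
embedding = F.normalize(hidden[:, 0], p=2, dim=1)  # CLS pooling + L2 normalize
```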
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53ef0dbee91aceb5c59218b685b8eede8e81ac488a48f0d06dea3c6b8ffad93f
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
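The tokenizer is a standard lowercasing `BertTokenizer` capped at 512 tokens. A quick sketch of checking how input text is wrapped in the special tokens defined above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("drewgenai/legal-ft-2")
encoded = tokenizer("An example sentence.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'an', 'example', 'sentence', '.', '[SEP]']
```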
vocab.txt ADDED
The diff for this file is too large to render. See raw diff