chelleboyer committed on
Commit f85f447 · verified · 1 Parent(s): 0c06ff4

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 1024,
+   "pooling_mode_cls_token": true,
+   "pooling_mode_mean_tokens": false,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
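This pooling configuration keeps only the CLS token of the transformer output as the sentence embedding. As a rough illustration (not part of this commit), the equivalent pooling stage could be rebuilt in Sentence Transformers as follows; the keyword arguments simply mirror the JSON keys above:

```python
from sentence_transformers import models

# Sketch only: rebuild the pooling stage described by 1_Pooling/config.json.
# The sentence embedding is the [CLS] token of the 1024-dim transformer output.
pooling = models.Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
    pooling_mode_max_tokens=False,
)
```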
README.md ADDED
@@ -0,0 +1,825 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:1334
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: How can the quality of reference data constrain outcomes?
+   sentences:
+   - 'Dong et al. (2024a)
+
+
+     Qingxiu Dong, Li Dong, Xingxing Zhang, Zhifang Sui, and Furu Wei. 2024a.
+
+
+     Self-Boosting Large Language Models with Synthetic Preference Data.
+
+
+     arXiv preprint arXiv:2410.06961 (2024).
+
+
+     Dong et al. (2022)
+
+
+     Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Jingyuan Ma, Rui Li, Heming Xia, Jingjing
+     Xu, Zhiyong Wu, Tianyu Liu, et al. 2022.
+
+
+     A survey on in-context learning.
+
+
+     arXiv preprint arXiv:2301.00234 (2022).
+
+
+     Dong et al. (2024b)
+
+
+     Yijiang River Dong, Tiancheng Hu, and Nigel Collier. 2024b.
+
+
+     Can LLM be a Personalized Judge?
+
+
+     arXiv preprint arXiv:2406.11657 (2024).
+
+
+     Dorner et al. (2024)
+
+
+     Florian E. Dorner, Vivian Y. Nastl, and Moritz Hardt. 2024.'
+   - 'Journal of Natural Language Processing 30, 1 (2023), 243–249.
+
+
+     Chen et al. (2024e)
+
+
+     Junjie Chen, Weihang Su, Zhumin Chu, Haitao Li, Qinyao Ai, Yiqun Liu, Min Zhang,
+     and Shaoping Ma. 2024e.
+
+
+     An Automatic and Cost-Efficient Peer-Review Framework for Language Generation
+     Evaluation.
+
+
+     arXiv:2410.12265 [cs.CL]
+
+
+     https://arxiv.org/abs/2410.12265
+
+
+     Chen et al. (2023c)
+
+
+     Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O Arik, Tomas Pfister, and
+     Somesh Jha. 2023c.
+
+
+     Adaptation with self-evaluation to improve selective prediction in llms.
+
+
+     arXiv preprint arXiv:2310.11689 (2023).
+
+
+     Chen et al. (2024d)'
+   - may be constrained by the quality and variety of the reference data.
+ - source_sentence: What are the key contributions of Shen and Wan (2023) in the field
+     of reference-free evaluation?
+   sentences:
+   - 'Li et al. (2023c)
+
+
+     Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2023c.
+
+
+     Generative judge for evaluating alignment.
+
+
+     arXiv preprint arXiv:2310.05470 (2023).
+
+
+     Li et al. (2023a)
+
+
+     Qintong Li, Leyang Cui, Lingpeng Kong, and Wei Bi. 2023a.
+
+
+     Collaborative Evaluation: Exploring the Synergy of Large Language Models and Humans
+     for Open-ended Generation Evaluation.
+
+
+     arXiv preprint arXiv:2310.19740 (2023).
+
+
+     Li et al. (2023b)
+
+
+     Ruosen Li, Teerth Patel, and Xinya Du. 2023b.
+
+
+     Prd: Peer rank and discussion improve large language model based evaluations.
+
+
+     arXiv preprint arXiv:2307.02762 (2023).
+
+
+     Li et al. (2017)'
+   - 'Springer.
+
+
+     Tyen et al. (2023)
+
+
+     Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Cărbune. 2023.
+
+
+     LLMs cannot find reasoning errors, but can correct them!
+
+
+     arXiv preprint arXiv:2311.08516 (2023).
+
+
+     Valmeekam et al. (2023)
+
+
+     Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. 2023.
+
+
+     Can large language models really improve by self-critiquing their own plans?
+
+
+     arXiv preprint arXiv:2310.08118 (2023).
+
+
+     Verga et al. (2024)
+
+
+     Pat Verga, Sebastian Hofstatter, Sophia Althammer, Yixuan Su, Aleksandra Piktus,
+     Arkady Arkhangorodsky, Minjie Xu, Naomi White, and Patrick Lewis. 2024.'
+   - 'Reference-Free Evaluation (Shen and Wan, 2023; Zheng et al., 2023a; He et al.,
+     2023b):'
+ - source_sentence: What role do LLM judges play in the iterative refinement process
+     described in the context?
+   sentences:
+   - "[Biases (§7.1)\n[Presentation-Related \n(§7.1.1)\n[Position bias (Blunch, 1984;\
+     \ Raghubir and Valenzuela, 2006; Ko et al., 2020; Wang et al., 2018; LLMS, 2025;\
+     \ Zheng et al., 2023a; Chen et al., 2024a; Wang et al., 2023b; Li et al., 2023c;\
+     \ Zheng et al., 2023b; Raina et al., 2024; Hou et al., 2024; Li et al., 2023d,\
+     \ b; Khan et al., 2024; Zhou et al., 2023a; Li et al., 2024a; Shi et al., 2024a;\
+     \ Stureborg et al., 2024; Zhao et al., 2024a), Verbosity bias (Nasrabadi, 2024;\
+     \ Ye et al., 2024b, a), leaf, text width=41em] ]\n[Social-Related (§7.1.2)"
+   - '3.2.3. Feedback for Refinement
+
+
+     After receiving the initial response, LLM judges provide actionable feedback to
+     iteratively improve output quality. By analyzing the response based on specific
+     task criteria, such as accuracy, coherence, or creativity, the LLM can identify
+     weaknesses in the output and offer suggestions for improvement. This iterative
+     refinement process plays a crucial role in applications that require adaptability (Madaan
+     et al., 2024; Paul et al., 2023; Chen et al., 2023a; Xu et al., 2023c; Huang et al.,
+     2023).'
+   - 'Gopalakrishnan et al. (2023)
+
+
+     Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev
+     Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2023.
+
+
+     Topical-chat: Towards knowledge-grounded open-domain conversations.
+
+
+     arXiv preprint arXiv:2308.11995 (2023).
+
+
+     Guan et al. (2021)
+
+
+     Jian Guan, Zhexin Zhang, Zhuoer Feng, Zitao Liu, Wenbiao Ding, Xiaoxi Mao, Changjie
+     Fan, and Minlie Huang. 2021.
+
+
+     OpenMEVA: A benchmark for evaluating open-ended story generation metrics.
+
+
+     arXiv preprint arXiv:2105.08920 (2021).
+
+
+     Guo et al. (2024)'
+ - source_sentence: In what ways does the LLMAAA approach help mitigate the effects
+     of noisy labels?
+   sentences:
+   - '6.2. Metric
+
+
+     The evaluation of LLMs-as-Judges models centers around assessing the extent to
+     which the model’s judgments align with human evaluations, which are typically
+     considered the benchmark for quality. Given the complexity and subjectivity of
+     many evaluation tasks, achieving high agreement with human ratings is a key indicator
+     of the LLM’s performance. To quantify this agreement, a range of statistical metrics
+     is employed. Below, we outline these metrics and their applications in evaluating
+     LLMs-as-Judges models.
+
+
+     6.2.1. Accuracy'
+   - Current LLM-as-Judge systems primarily focus on processing textual data, with
+     limited attention to integrating other modalities like images, audio, and video.
+     This single-modal approach falls short in complex scenarios requiring multimodal
+     analysis, such as combining visual and textual information in medical assessments.
+     Future systems should develop cross-modal integration capabilities to process
+     and evaluate multimodal data simultaneously (Chen et al., 2024b). Leveraging cross-modal
+     validation can enhance evaluation accuracy. Key research areas include efficient
+     multimodal feature extraction, integration, and the design of unified frameworks
+     for more comprehensive and precise evaluations.
+   - Additionally, the LLMAAA (Zhang et al., 2023a) framework incorporates an active
+     learning strategy to efficiently select high-information samples for annotation,
+     thereby mitigating the effects of noisy labels and reducing the reliance on costly
+     human annotation. These approach not only enhance the performance of task-specific
+     models but also offer new perspectives on the efficient application of LLMs in
+     annotation workflows.
+ - source_sentence: What metrics does the LLMS (2025) framework introduce to investigate
+     position bias in pairwise comparisons?
+   sentences:
+   - Overconfidence bias (Khan et al., 2024; Jung et al., 2024) in the context of LLMs-as-judges
+     refers to the tendency of models to exhibit an inflated level of confidence in
+     their judgments, often resulting in overly assertive evaluations that may not
+     accurately reflect the true reliability of the answer. This bias is particularly
+     concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate
+     the correctness of certain outputs, compromising the objectivity and dependability
+     of assessments.
+   - 'Recent studies have further examined position bias in the LLMs-as-judges context.
+
+     For instance, a framework (LLMS, 2025) is proposed to investigate position bias
+     in pairwise comparisons, introducing metrics such as repetition stability, position
+     consistency, and preference fairness to better understand how positions affect
+     LLM judgments.
+
+     Another study (Zheng et al., 2023a) explores the limitations of LLMs-as-judges,
+     including position biases, and verifies agreement between LLM judgments and human
+     preferences across multiple benchmarks.
+
+     These findings underscore the need for robust debiasing strategies to enhance
+     the fairness and reliableness of LLMs-as-judges.'
+   - The search task is a fundamental component of information retrieval (IR), focusing
+     on identifying the most relevant documents from extensive text collections based
+     on user queries. Traditionally, relevance assessments in search tasks have been
+     conducted by human annotators following established guidelines. However, recent
+     advances in large language models (LLMs) have opened up new opportunities for
+     utilizing these models as evaluators, offering an automated and scalable approach
+     to relevance assessment.
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.93
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.99
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 1.0
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 1.0
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.93
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.33000000000000007
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.19999999999999996
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.09999999999999998
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.93
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.99
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 1.0
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 1.0
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.9704150157509183
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.9603333333333333
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.9603333333333333
+       name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
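To make the three modules concrete, here is a hand-written equivalent using 🤗 Transformers directly. This is an illustrative sketch rather than part of the released code: it assumes the checkpoint loads as a plain `BertModel` (as `config.json` in this commit indicates) and reproduces CLS-token pooling followed by L2 normalization, matching the `Pooling` and `Normalize` modules above.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Sketch of the pipeline: transformer -> CLS-token pooling -> L2 normalization.
model_name = "chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df"
tokenizer = AutoTokenizer.from_pretrained(model_name)
bert = AutoModel.from_pretrained(model_name)

texts = ["How can the quality of reference data constrain outcomes?"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state      # (batch, seq_len, 1024)
cls_embeddings = token_embeddings[:, 0]                      # CLS-token pooling
sentence_embeddings = F.normalize(cls_embeddings, p=2, dim=1)  # Normalize module
print(sentence_embeddings.shape)  # expected: torch.Size([1, 1024])
```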
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")
+ # Run inference
+ sentences = [
+     'What metrics does the LLMS (2025) framework introduce to investigate position bias in pairwise comparisons?',
+     'Recent studies have further examined position bias in the LLMs-as-judges context.\nFor instance, a framework\xa0(LLMS, 2025) is proposed to investigate position bias in pairwise comparisons, introducing metrics such as repetition stability, position consistency, and preference fairness to better understand how positions affect LLM judgments.\nAnother study\xa0(Zheng et\xa0al., 2023a) explores the limitations of LLMs-as-judges, including position biases, and verifies agreement between LLM judgments and human preferences across multiple benchmarks.\nThese findings underscore the need for robust debiasing strategies to enhance the fairness and reliableness of LLMs-as-judges.',
+     'Overconfidence bias\xa0(Khan et\xa0al., 2024; Jung et\xa0al., 2024) in the context of LLMs-as-judges refers to the tendency of models to exhibit an inflated level of confidence in their judgments, often resulting in overly assertive evaluations that may not accurately reflect the true reliability of the answer. This bias is particularly concerning in evaluative contexts, as it can lead LLMs-as-judges to overstate the correctness of certain outputs, compromising the objectivity and dependability of assessments.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
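The `config_sentence_transformers.json` file in this commit also defines a `query` prompt ("Represent this sentence for searching relevant passages: "), the prompt the Snowflake arctic-embed family expects for search queries. A small, hypothetical retrieval sketch (the query/passage pairing below is taken from the widget examples and is illustrative only) shows how to apply it via `prompt_name`:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")

# Illustrative asymmetric retrieval: queries get the "query" prompt from
# config_sentence_transformers.json, passages are encoded without a prompt.
queries = ["How can the quality of reference data constrain outcomes?"]
passages = ["may be constrained by the quality and variety of the reference data."]

query_embeddings = model.encode(queries, prompt_name="query")
passage_embeddings = model.encode(passages)

scores = model.similarity(query_embeddings, passage_embeddings)
print(scores)  # higher score = more relevant passage
```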
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.93       |
+ | cosine_accuracy@3   | 0.99       |
+ | cosine_accuracy@5   | 1.0        |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.93       |
+ | cosine_precision@3  | 0.33       |
+ | cosine_precision@5  | 0.2        |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.93       |
+ | cosine_recall@3     | 0.99       |
+ | cosine_recall@5     | 1.0        |
+ | cosine_recall@10    | 1.0        |
+ | **cosine_ndcg@10**  | **0.9704** |
+ | cosine_mrr@10       | 0.9603     |
+ | cosine_map@100      | 0.9603     |
+
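The table above is produced by `InformationRetrievalEvaluator`. A minimal sketch of how such an evaluation is wired up is shown below; the query, corpus, and relevance mappings are hypothetical stand-ins for the held-out evaluation split, not the actual data behind this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("chelleboyer/llm-mm-good-309e6f79-505b-4c23-8452-37cc854e67df")

# Hypothetical evaluation data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "How can the quality of reference data constrain outcomes?"}
corpus = {
    "d1": "may be constrained by the quality and variety of the reference data.",
    "d2": "The search task is a fundamental component of information retrieval (IR).",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example-eval")
results = evaluator(model)
print(results)  # includes cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100
```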
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 1,334 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0                                                                         | sentence_1                                                                           |
+   |:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
+   | type    | string                                                                             | string                                                                               |
+   | details | <ul><li>min: 5 tokens</li><li>mean: 23.12 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 132.04 tokens</li><li>max: 306 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:-----------|:-----------|
+   | <code>What are the main components of the evaluation function \( E \) as described in the preliminaries section?</code> | <code>LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>1 Introduction<br><br>2 PRELIMINARIES<br><br>2.1 Evaluation Function E𝐸Eitalic_E<br><br>2.2 Evaluation Input<br><br>2.2.1 Evaluation Type 𝒯𝒯\mathcal{T}caligraphic_T<br>2.2.2 Evaluation Criteria 𝒞𝒞\mathcal{C}caligraphic_C.<br>2.2.3 Evaluation References ℛℛ\mathcal{R}caligraphic_R.<br><br><br>2.3 Evaluation Output<br><br><br><br>3 Functionality<br><br><br>3.1 Performance Evaluation<br><br>3.1.1 Responses Evaluation<br>3.1.2 Model Evaluation<br><br><br><br>3.2 Model Enhancement<br><br>3.2.1 Reward Modeling During Training<br>3.2.2 Acting as Verifier During Inference<br>3.2.3 Feedback for Refinement<br><br><br><br>3.3 Data Construction<br><br>3.3.1 Data Annotation<br>3.3.2 Data Synthesize<br><br><br><br><br><br>4 Methodology</code> |
+   | <code>How do LLMs contribute to model enhancement according to the functionalities outlined in the survey?</code> | <code>LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>1 Introduction<br><br>2 PRELIMINARIES<br><br>2.1 Evaluation Function E𝐸Eitalic_E<br><br>2.2 Evaluation Input<br><br>2.2.1 Evaluation Type 𝒯𝒯\mathcal{T}caligraphic_T<br>2.2.2 Evaluation Criteria 𝒞𝒞\mathcal{C}caligraphic_C.<br>2.2.3 Evaluation References ℛℛ\mathcal{R}caligraphic_R.<br><br><br>2.3 Evaluation Output<br><br><br><br>3 Functionality<br><br><br>3.1 Performance Evaluation<br><br>3.1.1 Responses Evaluation<br>3.1.2 Model Evaluation<br><br><br><br>3.2 Model Enhancement<br><br>3.2.1 Reward Modeling During Training<br>3.2.2 Acting as Verifier During Inference<br>3.2.3 Feedback for Refinement<br><br><br><br>3.3 Data Construction<br><br>3.3.1 Data Annotation<br>3.3.2 Data Synthesize<br><br><br><br><br><br>4 Methodology</code> |
+   | <code>What are the different approaches discussed under the Single-LLM System methodology?</code> | <code>4 Methodology<br><br><br>4.1 Single-LLM System<br><br>4.1.1 Prompt-based<br>4.1.2 Tuning-based<br>4.1.3 Post-processing<br><br><br><br>4.2 Multi-LLM System<br><br>4.2.1 Communication<br>4.2.2 Aggregation<br><br><br>4.3 Human-AI Collaboration System<br><br><br><br>5 Application<br><br>5.1 General<br>5.2 Multimodal<br>5.3 Medical<br>5.4 Legal<br>5.5 Financial<br>5.6 Education<br>5.7 Information Retrieval<br><br>5.8 Others<br><br>5.8.1 Soft Engineering<br>5.8.2 Biology<br>5.8.3 Social Science<br><br><br><br><br><br>6 Meta-evaluation<br><br><br>6.1 Benchmarks<br><br>6.1.1 Code Generation<br>6.1.2 Machine Translation<br>6.1.3 Text Summarization<br>6.1.4 Dialogue Generation<br>6.1.5 Automatic Story Generation<br>6.1.6 Values Alignment<br>6.1.7 Recommendation<br>6.1.8 Search<br>6.1.9 Comprehensive Data<br><br><br><br>6.2 Metric</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
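For reference, a minimal sketch of how this loss configuration would be constructed in Sentence Transformers; the variable names are illustrative, while the loss type, dimensions, and weights are the ones listed above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch negatives ranking loss, wrapped so that truncated embeddings
# (768, 512, 256, 128, and 64 dims) are all trained to rank passages correctly.
base_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```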
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 50
+ - `per_device_eval_batch_size`: 50
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 50
+ - `per_device_eval_batch_size`: 50
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
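Putting the dataset, loss, and the non-default hyperparameters above together, the training run can be approximated with the following sketch. The dataset contents and output path are placeholders (the real data is the 1,334 `sentence_0`/`sentence_1` pairs described earlier), and only the hyperparameters listed above deviate from the trainer defaults:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder for the (sentence_0, sentence_1) training pairs described above.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How can the quality of reference data constrain outcomes?"],
    "sentence_1": ["may be constrained by the quality and variety of the reference data."],
})
eval_dataset = train_dataset  # placeholder; a held-out split was used in practice

loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 256, 128, 64],
)

args = SentenceTransformerTrainingArguments(
    output_dir="llm-mm-good",  # placeholder output path
    num_train_epochs=10,
    per_device_train_batch_size=50,
    per_device_eval_batch_size=50,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```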
+
+ ### Training Logs
+ | Epoch  | Step | cosine_ndcg@10 |
+ |:------:|:----:|:--------------:|
+ | 1.0    | 27   | 0.9697         |
+ | 1.8519 | 50   | 0.9788         |
+ | 2.0    | 54   | 0.9775         |
+ | 3.0    | 81   | 0.9741         |
+ | 3.7037 | 100  | 0.9791         |
+ | 4.0    | 108  | 0.9741         |
+ | 5.0    | 135  | 0.9782         |
+ | 5.5556 | 150  | 0.9782         |
+ | 6.0    | 162  | 0.9782         |
+ | 7.0    | 189  | 0.9782         |
+ | 7.4074 | 200  | 0.9741         |
+ | 8.0    | 216  | 0.9741         |
+ | 9.0    | 243  | 0.9704         |
+ | 9.2593 | 250  | 0.9704         |
+ | 10.0   | 270  | 0.9704         |
+
+
+ ### Framework Versions
+ - Python: 3.11.12
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.51.3
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.6.0
+ - Datasets: 2.14.4
+ - Tokenizers: 0.21.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.51.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.51.3",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:69c327e3a11e713469a94c178c590f811aebf745157a0ef6234b13c4648c53da
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff