Nikita Martynov committed
Commit 26fe676 · 1 Parent(s): 75aed9e

.gitignore ADDED

.DS_Store
README.md CHANGED
---
license: mit
language:
- ru
base_model:
- t-tech/T-lite-it-1.0
pipeline_tag: text-generation
library_name: transformers
tags:
- pytorch
metrics:
- mae
- pearsonr
---
# pollux-judge-7b-r

![banner](images/logo_pollux_horiz_short_WHITEBG.png)

pollux-judge-7b-r is a 7-billion-parameter generative language model specifically designed to evaluate the quality of other language models' responses in Russian.
The model assesses answer quality given the input instruction, a specific criterion, and its rubrics, providing automated evaluation of LLM performance on Russian-language tasks.

## Model Details

### Model Description

pollux-judge-7b-r is an integral component of the POLLUX project, a comprehensive initiative dedicated to evaluating the generative capabilities of Large Language Models (LLMs).
At the heart of this project lies the [POLLUX dataset](https://huggingface.co/datasets/ai-forever/POLLUX), which introduces systematic taxonomies for both generative tasks and evaluation criteria, providing quantitative and qualitative assessments of responses from top-tier LLMs.

Built upon the [t-tech/T-lite-it-1.0](https://huggingface.co/t-tech/T-lite-it-1.0) architecture, pollux-judge-7b-r is a decoder-based 7-billion-parameter model trained with a combination of Mean Squared Error (for the regression head) and Cross-Entropy (for the language modeling head) objectives.
The model predicts both numerical scores and detailed textual rationales with separate heads, conditioned on the original instruction, the LLM's response, the specific evaluation criterion, its scoring rubrics, and a reference answer when available.

While the model is technically capable of processing any type of instruction and criterion when properly formatted, its training has been specifically optimized for the generative tasks and evaluation criteria derived from the taxonomies established within the [POLLUX dataset](https://huggingface.co/datasets/ai-forever/POLLUX).

- **Model type:** decoder
- **Language(s) (NLP):** Russian
- **License:** MIT
- **Finetuned from model:** [t-tech/T-lite-it-1.0](https://huggingface.co/t-tech/T-lite-it-1.0)

### Model Sources

- **Repository:** [POLLUX code base](https://github.com/ai-forever/POLLUX)
- **Paper:** [ArXiv preprint](https://arxiv.org/pdf/2505.24616)

## Uses

### Direct Use

pollux-judge-7b-r is specifically designed for assessing text responses against a single, predefined criterion per evaluation run.
The model operates optimally when provided with all essential components: the source instruction, the response to be evaluated (typically generated by another LLM), the specific evaluation criterion, and its corresponding scoring rubrics.

### Out-of-Scope Use

While the model may **technically** process multiple criteria simultaneously, such usage falls outside its intended design and may yield unpredictable results.
Similarly, the model is not designed to determine appropriate evaluation criteria on its own; it requires explicit criterion specification to perform reliable assessments.

For optimal performance and reliable results, users should structure each evaluation session around one criterion at a time, providing all necessary contextual components to enable the model's scoring and rationale generation capabilities.

## MODEL OUTPUT DISCLAIMER AND LIMITATION OF LIABILITY

All content, responses, and outputs generated by pollux-judge-7b-r (the "Model") are produced through automated computational processes based on statistical patterns learned from pre-training data.
Such outputs do not constitute statements, opinions, recommendations, or positions of the model developers, publishers, or affiliated entities (collectively, the "Developers").

The Model's outputs do not represent, reflect, or endorse any views, beliefs, policies, or positions held by the Developers.
Generated content should not be interpreted as official statements, advice, or guidance from the Developers.

While the Developers employed appropriate data curation practices during fine-tuning and avoided the intentional inclusion of inappropriate content, the Model's responses may reflect patterns present in the underlying pre-training datasets, which were sourced from publicly available internet content and other large-scale text corpora.

The Developers expressly disclaim responsibility for any content generated by the Model. Users acknowledge that:
- Generated outputs are probabilistic and may contain inaccuracies, biases, or inappropriate content
- The Developers cannot guarantee the accuracy, completeness, or appropriateness of any Model output
- Users assume full responsibility for evaluating and using Model-generated content

Users are solely responsible for reviewing, validating, and determining the appropriateness of any Model-generated content before use or distribution.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

torch.manual_seed(42)

# Prompt template expected by the model. The section headers are in Russian:
# "Задание для оценки" = task to evaluate, "Эталонный ответ" = reference answer,
# "Ответ для оценки" = response to evaluate, "Критерий оценки" = evaluation criterion,
# "Шкала оценивания по критерию" = scoring rubrics for the criterion.
PROMPT_TEMPLATE = '''instruction: |
### Задание для оценки:
{instruction}

reference_answer: |
### Эталонный ответ:
{reference_answer}

response: |
### Ответ для оценки:
{answer}

score_name: |
### Критерий оценки:
{criteria_name}

score_rubrics: |
### Шкала оценивания по критерию:
{criteria_rubrics}
'''

# Toy example: instruction "Сколько будет 2+2?" ("What is 2+2?"), answer "Будет 4" ("It is 4"),
# criterion "Правильность ответа" ("Correctness of the answer") with a 0-2 rubric.
instruction = 'Сколько будет 2+2?'
reference_answer = ''
answer = 'Будет 4'
criteria_name = 'Правильность ответа'
criteria_rubrics = '''0: Дан неправильный ответ или ответ отсутствует.

1: Ответ модели неполный (не на все вопросы задания получен ответ, в формулировке ответа отсутствует часть информации).

2: Ответ модели совпадает с эталонным или эквивалентен ему.'''

prompt = PROMPT_TEMPLATE.format(instruction=instruction,
                                reference_answer=reference_answer,
                                answer=answer,
                                criteria_name=criteria_name,
                                criteria_rubrics=criteria_rubrics)

MODEL_PATH = "ai-forever/pollux-judge-7b-r"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=4096
)
# Strip the prompt tokens so only the newly generated evaluation remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```

## Training Details

### Training Data

Given the substantial time investment required for manual dataset creation (approximately 24,447 hours for the [POLLUX dataset](https://huggingface.co/datasets/ai-forever/POLLUX)), we opted to employ synthetic data for training, as acquiring a manually composed training set of comparable size was not feasible.

Our synthetic data generation process proceeded in several stages.
Initially, we generated 78,000 instructions using three state-of-the-art language models: [DeepSeekV3](https://huggingface.co/deepseek-ai/DeepSeek-V3), [OpenAI GPT-4o](https://openai.com/index/hello-gpt-4o/), and [o3-mini](https://openai.com/index/openai-o3-mini/), with each model contributing equally to the instruction pool.
These instructions were based on the POLLUX tasks taxonomy and complexity levels to ensure consistency with the original framework.
The training data excludes the Recommendations, Applied Brainstorming, Literary Text Generation, Questions Generation, Style Transfer, Code Modification, and AI as a Character tasks, along with the corresponding Task-specific criteria, to enable out-of-domain evaluation of the resulting LM-as-a-Judge model.
To maintain data quality, we implemented a filtering procedure that removed instructions containing more than 5% non-Russian tokens as well as duplicate entries, ultimately yielding a refined set of 26,000 high-quality instructions.

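For illustration, a minimal sketch of such a filter is shown below. The whitespace tokenization, the Cyrillic-based heuristic for spotting non-Russian tokens, and the exact-match deduplication are simplifying assumptions made for this example, not the exact pipeline used to build the training set.

```python
import re

CYRILLIC = re.compile(r"[а-яё]", re.IGNORECASE)

def non_russian_ratio(text: str) -> float:
    """Share of whitespace-separated tokens that contain no Cyrillic letters."""
    tokens = text.split()
    if not tokens:
        return 1.0
    return sum(1 for tok in tokens if not CYRILLIC.search(tok)) / len(tokens)

def filter_instructions(instructions, max_non_russian=0.05):
    """Drop duplicates and instructions whose non-Russian token share exceeds 5%."""
    seen, kept = set(), []
    for text in instructions:
        key = " ".join(text.lower().split())  # normalize case/whitespace for deduplication
        if key in seen or non_russian_ratio(text) > max_non_russian:
            continue
        seen.add(key)
        kept.append(text)
    return kept

# Keeps only the first, Russian, unique instruction
print(filter_instructions([
    "Напишите короткое письмо коллеге.",
    "Напишите короткое письмо коллеге.",
    "Write a short email to a colleague.",
]))
```
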
Subsequently, we mapped these synthetic instructions to their corresponding evaluation criteria sets using the same algorithm employed in the original [POLLUX dataset](https://huggingface.co/datasets/ai-forever/POLLUX).
Each criteria set comprised Critical, General, Subjective, and relevant Domain- and Task-specific criteria (for the detailed methodology, see Section 2.3 in the [preprint](https://arxiv.org/pdf/2505.24616)).
To generate diverse responses, we employed 15 open-source language models from various families, including Llama, Phi, Qwen, Mistral, and Gemma, with each model contributing equally to the answer generation process (for the complete list of models, see Appendix M.2 in the [preprint](https://arxiv.org/pdf/2505.24616)).

For criteria annotation, we utilized [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), which generated numerical scores based on the established criterion rubrics along with corresponding rationales for each evaluation.
This systematic approach resulted in 8,000,000 samples, each containing the complete tuple of (instruction, answer, criterion, score, rationale).
From this dataset, we performed stratified random sampling across tasks to obtain our final training set of 1,000,000 samples, ensuring balanced representation across different task categories.

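The stratified sampling step can be pictured roughly as in the sketch below; the per-task grouping key, the equal-quota policy, and the field name `task` are assumptions for illustration rather than the actual selection code.

```python
import random
from collections import defaultdict

def stratified_sample(samples, key="task", total=1_000_000, seed=42):
    """Draw a fixed-size subset with (near-)equal representation of each task category."""
    rng = random.Random(seed)
    by_task = defaultdict(list)
    for sample in samples:              # each sample: dict with instruction, answer, criterion, score, rationale, task
        by_task[sample[key]].append(sample)

    quota = total // len(by_task)       # equal share per task in this sketch
    selected = []
    for items in by_task.values():
        rng.shuffle(items)
        selected.extend(items[:quota])  # tasks with fewer items contribute everything they have
    return selected
```
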
### Training Procedure

The input to the LM-as-a-Judge model includes the source instruction, the LLM's answer, the criterion name, its rubrics, and the reference answer if present.
A separate regression head predicts the numerical score, while the language modeling head generates the textual rationale.
The total loss is the sum of the MSE and CE objectives.

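A minimal sketch of this dual-head setup is given below, assuming a Hugging Face-style causal LM backbone and last-token pooling for the regression head; the pooling strategy and head design are our assumptions for illustration and may differ from the released training code.

```python
import torch
import torch.nn as nn

class JudgeWithScoreHead(nn.Module):
    """Causal LM with an extra scalar head; total loss = CE (rationale tokens) + MSE (score)."""

    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone                     # any HF-style causal LM
        self.score_head = nn.Linear(hidden_size, 1)  # regression head for the numerical score

    def forward(self, input_ids, attention_mask, labels, target_scores):
        out = self.backbone(input_ids=input_ids,
                            attention_mask=attention_mask,
                            labels=labels,            # cross-entropy over the rationale tokens
                            output_hidden_states=True)
        hidden = out.hidden_states[-1]                # (batch, seq_len, hidden_size)
        # Pool the last non-padded position of every sequence for the score prediction
        last_idx = attention_mask.sum(dim=1) - 1
        batch_idx = torch.arange(hidden.size(0), device=hidden.device)
        pred_scores = self.score_head(hidden[batch_idx, last_idx]).squeeze(-1)

        mse = nn.functional.mse_loss(pred_scores, target_scores.float())
        return out.loss + mse, pred_scores, out.logits  # CE + MSE, as described above
```
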
#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
- **Epochs:** 3
- **Optimizer:** AdamW
- **Learning rate:** from 1e-05 to 0 with a linear scheduler
- **Batch size:** 256

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

For testing data, we employed the [POLLUX dataset](https://huggingface.co/datasets/ai-forever/POLLUX).
Note that this provides both in-domain and out-of-domain evaluation, as some of the tasks and criteria are absent from the training data.

#### Metrics

We employed **Spearman’s rank correlation** with expert judgements and **Mean Absolute Error (MAE)**, alongside the Verdict Confidence, to assess the performance of pollux-judge-7b-r and compare it with that of the reference models.

MAE offers a high degree of interpretability, as it is measured on the same scale as the annotation, specifically in points.
Spearman’s rank correlation, on the other hand, quantifies the degree of monotonic association between the two rankings of model outputs and demonstrates how consistently the LLM-as-a-Judge reproduces the relative ordering of output quality established by human experts.

The Verdict Confidence is computed as the maximum empirical probability among the assigned scores.
We adopted Verdict Confidence as a measure of annotator agreement instead of Krippendorff’s alpha or the Dawid-Skene algorithm, as those are harder to interpret.

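For concreteness, the three measures can be computed roughly as follows (using `scipy` for Spearman’s rho; averaging the per-item Verdict Confidence over the test sample is our assumption about the aggregation):

```python
from collections import Counter

import numpy as np
from scipy.stats import spearmanr

def mae(judge_scores, expert_scores):
    """Mean absolute error, in the same units (points) as the criterion rubrics."""
    judge = np.asarray(judge_scores, dtype=float)
    expert = np.asarray(expert_scores, dtype=float)
    return float(np.mean(np.abs(judge - expert)))

def spearman(judge_scores, expert_scores):
    """Monotonic association between the judge's and the experts' score rankings."""
    return float(spearmanr(judge_scores, expert_scores).correlation)

def verdict_confidence(scores_per_item):
    """Maximum empirical probability among the scores assigned to each item, averaged over items."""
    per_item = []
    for scores in scores_per_item:
        counts = Counter(scores)
        per_item.append(max(counts.values()) / len(scores))
    return float(np.mean(per_item))

# Toy example: three annotators scored two items
print(verdict_confidence([[2, 2, 1], [0, 0, 0]]))  # (2/3 + 1) / 2 ≈ 0.83
```
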
### Results

As reference models, we took [OpenAI GPT-4o](https://openai.com/index/hello-gpt-4o/), [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1) and [M-Prometheus-14B](https://huggingface.co/Unbabel/M-Prometheus-14B).
We report aggregate results averaged over the evaluated models (the LM-as-a-Judge predicts the scores that were assigned to the answers of a particular LLM) on the out-of-domain part of the [POLLUX dataset](https://huggingface.co/datasets/ai-forever/POLLUX).
For detailed evaluation results, see Appendix D in the [preprint](https://arxiv.org/pdf/2505.24616).

Spearman’s rank correlation:

| Evaluated LLM | pollux-judge-7b-r | DeepSeek-R1 | M-Prometheus-14B | GPT-4o (2024-11-20) |
| --- | --- | --- | --- | --- |
| Claude 3.5 Sonnet (2024-10-22) | 0.653 | 0.739 | -0.006 | 0.759 |
| GPT-4o (2024-08-06) | 0.572 | 0.627 | -0.033 | 0.643 |
| GigaChat-Max (1.0.26.20) | 0.582 | 0.640 | 0.027 | 0.649 |
| Llama-3.1-405B | 0.587 | 0.591 | 0.022 | 0.639 |
| T-pro-it-1.0 | 0.543 | 0.573 | -0.044 | 0.616 |
| YaGPT-4-Pro (2024-10-23) | 0.599 | 0.635 | 0.099 | 0.671 |
| o1 (2024-12-17) | 0.674 | 0.748 | -0.022 | 0.771 |
| Avg. | 0.602 | 0.647 | 0.019 | 0.674 |

MAE (in points):

| Evaluated LLM | pollux-judge-7b-r | DeepSeek-R1 | M-Prometheus-14B | GPT-4o (2024-11-20) |
| --- | --- | --- | --- | --- |
| Claude 3.5 Sonnet (2024-10-22) | 0.519 | 0.245 | 2.697 | 0.236 |
| GPT-4o (2024-08-06) | 0.489 | 0.349 | 2.676 | 0.339 |
| GigaChat-Max (1.0.26.20) | 0.478 | 0.350 | 2.468 | 0.342 |
| Llama-3.1-405B | 0.513 | 0.448 | 1.912 | 0.405 |
| T-pro-it-1.0 | 0.503 | 0.475 | 2.978 | 0.425 |
| YaGPT-4-Pro (2024-10-23) | 0.495 | 0.387 | 1.793 | 0.369 |
| o1 (2024-12-17) | 0.460 | 0.244 | 2.873 | 0.229 |
| Avg. | 0.494 | 0.356 | 2.487 | 0.335 |

Verdict Confidence (calculated on the whole test sample):

| Evaluated LLM | pollux-judge-7b-r | DeepSeek-R1 | M-Prometheus-14B | GPT-4o (2024-11-20) |
| --- | --- | --- | --- | --- |
| Claude 3.5 Sonnet (2024-10-22) | 0.795 | 0.879 | 0.645 | 0.877 |
| GPT-4o (2024-08-06) | 0.820 | 0.877 | 0.702 | 0.877 |
| GigaChat-Max (1.0.26.20) | 0.824 | 0.878 | 0.715 | 0.879 |
| Llama-3.1-405B | 0.777 | 0.836 | 0.684 | 0.837 |
| T-pro-it-1.0 | 0.787 | 0.838 | 0.644 | 0.842 |
| YaGPT-4-Pro (2024-10-23) | 0.814 | 0.866 | 0.738 | 0.867 |
| o1 (2024-12-17) | 0.814 | 0.885 | 0.643 | 0.882 |
| Avg. | 0.806 | 0.866 | 0.684 | 0.867 |

## Technical Specifications

### Compute Infrastructure

#### Hardware

64 NVIDIA A100 80GB GPUs.

#### Software

The model was trained with [FSDP](https://huggingface.co/docs/peft/accelerate/fsdp).

## Citation

**BibTeX:**

```
@misc{martynov2025eyejudgementdissectingevaluation,
      title={Eye of Judgement: Dissecting the Evaluation of Russian-speaking LLMs with POLLUX},
      author={Nikita Martynov and Anastasia Mordasheva and Dmitriy Gorbetskiy and Danil Astafurov and Ulyana Isaeva and Elina Basyrova and Sergey Skachkov and Victoria Berestova and Nikolay Ivanov and Valeriia Zanina and Alena Fenogenova},
      year={2025},
      eprint={2505.24616},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.24616},
}
```
images/logo_pollux_horiz_short_WHITEBG.png ADDED