Taka008 committed
Commit ff64e01 (verified) · Parent(s): 51f1a30

Update README.md

Files changed (1): README.md (+15 -6)
README.md CHANGED
@@ -146,12 +146,17 @@ We evaluated the models using 100 examples from the dev split. Note that we skip
 
  | Model name | average | EL | FA | HE | MC | MR | MT | NLI | QA | RC | SUM |
  | :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
- | [llm-jp/llm-jp-3-8x1.8b](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b) | 0.4538 | 0.3868 | 0.2412 | 0.265 | 0.5300 | 0.5100 | 0.8097 | 0.4760 | 0.5373 | 0.7553 | 0.0262 |
- | [llm-jp/llm-jp-3-8x1.8b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct2) | 0.5130 | 0.4475 | 0.2296 | 0.4050 | 0.6433 | 0.5600 | 0.8150 | 0.5660 | 0.5611 | 0.8366 | 0.0660 |
- | [llm-jp/llm-jp-3-8x1.8b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct3) | 0.5145 | 0.4515 | 0.2265 | 0.4250 | 0.6833 | 0.5400 | 0.8212 | 0.5580 | 0.5453 | 0.8193 | 0.0753 |
- | [llm-jp/llm-jp-3-8x13b](https://huggingface.co/llm-jp/llm-jp-3-8x13b) | 0.5867 | 0.5445 | 0.2913 | 0.4950 | 0.8033 | 0.7200 | 0.8378 | 0.5780 | 0.6461 | 0.8543 | 0.0966 |
- | [llm-jp/llm-jp-3-8x13b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct2) | 0.6256 | 0.5521 | 0.2889 | 0.5250 | 0.8967 | 0.7500 | 0.8362 | 0.6820 | 0.6367 | 0.9065 | 0.1816 |
- | [llm-jp/llm-jp-3-8x13b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct3) | 0.6246 | 0.5477 | 0.2852 | 0.5250 | 0.9067 | 0.7600 | 0.8386 | 0.6880 | 0.6267 | 0.9041 | 0.1635 |
+ | [llm-jp/llm-jp-3-7.2b](https://huggingface.co/llm-jp/llm-jp-3-7.2b) | 0.455 | 0.400 | 0.266 | 0.350 | 0.547 | 0.430 | 0.809 | 0.362 | 0.545 | 0.814 | 0.028 |
+ | [llm-jp/llm-jp-3-7.2b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct3) | 0.514 | 0.447 | 0.245 | 0.435 | 0.693 | 0.510 | 0.826 | 0.588 | 0.497 | 0.838 | 0.059 |
+ | [llm-jp/llm-jp-3-172b](https://huggingface.co/llm-jp/llm-jp-3-172b) | 0.543 | 0.408 | 0.266 | 0.515 | 0.763 | 0.670 | 0.823 | 0.574 | 0.569 | 0.829 | 0.015 |
+ | [llm-jp/llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) | 0.613 | 0.517 | 0.271 | 0.570 | 0.873 | 0.730 | 0.844 | 0.728 | 0.601 | 0.883 | 0.112 |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | [llm-jp/llm-jp-3-8x1.8b](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b) | 0.454 | 0.387 | 0.241 | 0.265 | 0.530 | 0.510 | 0.810 | 0.476 | 0.537 | 0.755 | 0.026 |
+ | [llm-jp/llm-jp-3-8x1.8b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct2) | 0.513 | 0.448 | 0.230 | 0.405 | 0.643 | 0.560 | 0.815 | 0.566 | 0.561 | 0.837 | 0.066 |
+ | [llm-jp/llm-jp-3-8x1.8b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct3) | 0.515 | 0.452 | 0.227 | 0.425 | 0.683 | 0.540 | 0.821 | 0.558 | 0.545 | 0.819 | 0.075 |
+ | [llm-jp/llm-jp-3-8x13b](https://huggingface.co/llm-jp/llm-jp-3-8x13b) | 0.587 | 0.545 | 0.291 | 0.495 | 0.803 | 0.720 | 0.838 | 0.578 | 0.646 | 0.854 | 0.097 |
+ | [llm-jp/llm-jp-3-8x13b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct2) | 0.626 | 0.552 | 0.289 | 0.525 | 0.897 | 0.750 | 0.836 | 0.682 | 0.637 | 0.907 | 0.182 |
+ | [llm-jp/llm-jp-3-8x13b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct3) | 0.625 | 0.548 | 0.285 | 0.525 | 0.907 | 0.760 | 0.839 | 0.688 | 0.627 | 0.904 | 0.164 |
 
  ### Japanese MT Bench
 
@@ -161,7 +166,9 @@ For more details, please refer to the [codes](https://github.com/llm-jp/llm-jp-j
 
  | Model name | average | coding | extraction | humanities | math | reasoning | roleplay | stem | writing |
  | :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
+ | [llm-jp/llm-jp-3-7.2b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct3) | 5.79 | 3.46 | 5.94 | 8.15 | 3.95 | 4.46 | 7.51 | 6.23 | 6.66 |
  | [llm-jp/llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) | 6.36 | 4.24 | 6.66 | 8.11 | 4.58 | 5.74 | 7.44 | 6.76 | 7.36 |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
  | [llm-jp/llm-jp-3-8x1.8b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct2) | 5.47 | 3.47 | 4.90 | 7.78 | 3.51 | 4.38 | 6.84 | 6.35 | 6.54 |
  | [llm-jp/llm-jp-3-8x1.8b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct3) | 5.52 | 3.60 | 5.23 | 7.81 | 3.87 | 4.53 | 6.40 | 5.98 | 6.72 |
  | [llm-jp/llm-jp-3-8x13b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct2) | 6.62 | 4.50 | 6.53 | 8.56 | 5.30 | 6.03 | 7.86 | 7.10 | 7.12 |
@@ -177,7 +184,9 @@ The scores represent the average values obtained from five rounds of inference a
 
  | Model name | Acceptance rate (%, ↑) | Violation rate (%, ↓) |
  | :--- | ---: | ---: |
+ | [llm-jp/llm-jp-3-7.2b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-7.2b-instruct3) | 92.86 | 2.44 |
  | [llm-jp/llm-jp-3-172b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-172b-instruct3) | 95.48 | 1.67 |
+ | --- | --- | --- |
  | [llm-jp/llm-jp-3-8x1.8b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct2) | 86.13 | 7.56 |
  | [llm-jp/llm-jp-3-8x1.8b-instruct3](https://huggingface.co/llm-jp/llm-jp-3-8x1.8b-instruct3) | 92.20 | 2.20 |
  | [llm-jp/llm-jp-3-8x13b-instruct2](https://huggingface.co/llm-jp/llm-jp-3-8x13b-instruct2) | 88.63 | 6.01 |
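For reference, the `average` column in the updated jaster table appears to be the unweighted mean of the ten task scores (EL, FA, HE, MC, MR, MT, NLI, QA, RC, SUM); the figures in the new rows are consistent with this, e.g. for llm-jp-3-172b-instruct3:

$$
\text{average} = \frac{1}{10}\sum_{t \in \{\mathrm{EL},\ldots,\mathrm{SUM}\}} s_t
= \frac{0.517 + 0.271 + 0.570 + 0.873 + 0.730 + 0.844 + 0.728 + 0.601 + 0.883 + 0.112}{10}
= 0.6129 \approx 0.613
$$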
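Each row links to a Hugging Face model repository. A minimal sketch of loading one of the listed instruct checkpoints with the standard `transformers` chat-template flow is shown below; the model name is taken from the tables above, while the prompt and generation parameters are illustrative assumptions, not the settings used for these benchmarks.

```python
# Minimal sketch: load one instruct model from the tables above and run a single chat turn.
# Generation settings are illustrative assumptions, not the benchmark configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "llm-jp/llm-jp-3-7.2b-instruct3"  # any instruct model listed above

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Build the prompt with the model's own chat template.
chat = [{"role": "user", "content": "自然言語処理とは何ですか？"}]  # "What is natural language processing?"
input_ids = tokenizer.apply_chat_template(
    chat,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
    )

# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

This is only a quick smoke test of the checkpoints; the scores in the tables above come from the linked evaluation code, not from this snippet.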