Update README.md
---
library_name: transformers
license: gemma
language:
- ja
base_model:
- google/gemma-3-12b-it
pipeline_tag: image-text-to-text
---

# AXCXEPT/EZO2.5-gemma-3-12b-it-Preview



---

## Model Details

By mixing the concepts behind the recently introduced "GRPO" and "PPO" methods, in which an LLM improves its own capabilities, into our proprietary training method "EZO," we successfully enhanced the Japanese performance of the base model on both Japanese MT Bench and Elyza Tasks100. This was achieved using only 3,000 training samples and two hours of training on 8 H200 GPUs.

While this training method is still in the research phase and requires further automation and ablation studies, we believe it offers a viable, low-budget alternative to complex and time-consuming reinforcement learning approaches such as GRPO/PPO.
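The core step of the GRPO idea referenced above can be illustrated in isolation: for each prompt, several completions are sampled and scored by a reward function, and each completion's advantage is its reward normalized against its own group. The sketch below shows only that normalization step with hypothetical reward values; it is not the EZO training code, which is proprietary.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each sampled completion's
    reward against the mean and std of its own group."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Hypothetical rewards for 4 completions sampled from one prompt.
rewards = [1.0, 0.0, 0.5, 0.5]
advantages = group_relative_advantages(rewards)
# Completions scoring above the group mean receive positive advantage
# (and are reinforced); those below receive negative advantage.
```

Because the baseline is the group mean rather than a learned value function, no separate critic model is needed, which is part of what makes such methods attractive on a limited budget.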

## Benchmark



Starting from google/gemma-3-12b-it, which already exhibited very strong Japanese performance, this model achieves a further gain after only a short training run. It even comes close to some 32B and 72B models on parts of the evaluation, showing that specialized gains can be realized on top of improvements to the base model.

Note that greater benchmark diversity will be needed going forward; we therefore plan to run evaluations in English as well, where more benchmarks are available, in order to study the practical value of these training results.

---

## How to use

Runs on a single A40 GPU.

```bash
vllm serve AXCXEPT/EZO2.5-gemma-3-12b-it-Preview --max-model-len 32768 --enforce-eager
```

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="token-abc123",
)

prompt = """Every morning Aya goes for a $9$-kilometer-long walk and stops at a coffee shop afterwards. When she walks at a constant speed of $s$ kilometers per hour, the walk takes her 4 hours, including $t$ minutes spent in the coffee shop. When she walks $s+2$ kilometers per hour, the walk takes her 2 hours and 24 minutes, including $t$ minutes spent in the coffee shop. Suppose Aya walks at $s+\frac{1}{2}$ kilometers per hour. Find the number of minutes the walk takes her, including the $t$ minutes spent in the coffee shop."""

completion = client.chat.completions.create(
    model="AXCXEPT/EZO2.5-gemma-3-12b-it-Preview",
    messages=[
        {"role": "user", "content": prompt}
    ],
    temperature=0.0,
    top_p=1.0,
    max_tokens=20480,
)

print(completion.choices[0].message)
```

<b>Benchmark scores are based on inference with temperature: 0.0, top_p: 1.0, and max_tokens: 20480. Evaluation that accounts for sampling variance (e.g., Cons@64) has not been performed.</b>
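The sample prompt above has a closed-form answer, which makes it a handy spot check: subtracting the two rate equations 9/s + t/60 = 4 and 9/(s+2) + t/60 = 2.4 gives a quadratic in s, yielding s = 2.5 km/h and t = 24 minutes, so at s + 1/2 = 3 km/h the total is 204 minutes. A correct model response should arrive at 204. This arithmetic check is illustrative only and is not part of the benchmark above.

```python
import math

# Subtracting the two equations: 9/s - 9/(s+2) = 4 - 2.4
# which simplifies to s^2 + 2s - 11.25 = 0.
s = (-2 + math.sqrt(4 + 4 * 11.25)) / 2   # walking speed, km/h
t = 60 * (4 - 9 / s)                      # coffee-shop minutes
total = 9 / (s + 0.5) * 60 + t            # total minutes at s + 1/2 km/h

print(round(s, 6), round(t, 6), round(total, 6))  # 2.5 24.0 204.0
```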
---

## License

This model has been developed for research purposes. Please use it with the understanding that our company and the developers accept no responsibility for any damages arising from its use.

---

## Special Thanks

We would like to express our sincere respect and appreciation to Google and its development team for creating the base model upon which this model is built.