|
--- |
|
license: other |
|
license_name: hyperclovax-seed |
|
license_link: LICENSE |
|
--- |
|
# HyperCLOVAX-SEED-Text-Instruct-0.5B

## Overview |
|
|
|
HyperCLOVAX-SEED-Text-Instruct-0.5B is a text‑to‑text model with instruction‑following capabilities that excels at understanding the Korean language and culture. Compared with external competitors of similar scale, it demonstrates improved mathematical performance and a substantial enhancement in Korean‑language capability. It is currently the smallest model released by the HyperCLOVA X team, making it a lightweight solution suitable for deployment in resource‑constrained environments such as edge devices. It supports a maximum context length of 4 K tokens and functions as a versatile small model applicable to a wide range of tasks. The total cost of a single training run was 4.358 K A100 GPU hours (approximately USD 6.537 K), roughly 39 times lower than the cost of training `QWEN2.5‑0.5B‑instruct`.
|
|
|
|
|
## Basic Information |
|
|
|
- **Architecture**: Transformer‑based (Dense Model) |
|
- **Parameters**: 0.57 B total; 0.45 B excluding the tied token embeddings (see the sketch after this list)
|
- **Input/Output Format**: Text / Text |
|
- **Maximum Context Length**: 4 K tokens |
|
- **Knowledge Cutoff Date**: Trained on data up to January 2025 |
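
The reported split between total and non‑embedding parameters can be checked directly from the checkpoint. Below is a minimal sketch, assuming the model id on the Hugging Face Hub; because the embeddings are tied, the shared matrix appears once in `model.parameters()`, so subtracting it once yields the non‑embedding count:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B")

# With tied input/output embeddings, the shared weight is counted once.
total = sum(p.numel() for p in model.parameters())
embedding = model.get_input_embeddings().weight.numel()
print(f"total: {total / 1e9:.2f} B, excluding embeddings: {(total - embedding) / 1e9:.2f} B")
```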
|
|
|
|
|
## Training and Data |
|
|
|
The training dataset for HyperCLOVAX-SEED-Text-Instruct-0.5B consists of diverse sources, including high‑quality data accumulated during the development of HyperCLOVA X. Training was conducted in three main stages:
|
1. **Pretraining**: Knowledge acquisition using high‑quality data and a high‑performance pretrained model. |
|
2. **Rejection Sampling Fine‑Tuning (RFT)**: Enhancement of multi‑domain knowledge and complex reasoning capabilities (see the sketch after this list).
|
3. **Supervised Fine‑Tuning (SFT)**: Improvement of instruction‑following proficiency. |
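
The RFT recipe itself is not published here; the following is only an illustrative sketch of a rejection‑sampling data‑collection loop. `sample_response` and `judge` are hypothetical callables standing in for whatever generation and quality‑filtering components are actually used:

```python
# Illustrative sketch (not the published recipe): sample several candidate
# responses per prompt, keep only those the judge accepts, and fine-tune on
# the kept pairs with the usual supervised loss.
def collect_rft_data(sample_response, judge, prompts, num_samples=8):
    kept = []
    for prompt in prompts:
        for _ in range(num_samples):
            response = sample_response(prompt)  # hypothetical generator
            if judge(prompt, response):         # hypothetical quality filter
                kept.append({"prompt": prompt, "response": response})
    return kept
```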
|
|
|
|
|
## Training Cost |
|
|
|
HyperCLOVAX-SEED-Text-Instruct-0.5B leveraged HyperCLOVA X’s lightweight training process and high‑quality data to achieve significantly lower training costs than industry‑leading competitors of similar scale. Excluding the SFT stage, a single pretraining run incurred the following costs:
|
|
|
| Pretraining Cost Category | HyperCLOVAX-SEED-Text-Instruct-0.5B | QWEN2.5‑0.5B‑instruct | |
|
|---------------------------------|-----------------------------------------------|-------------------------------------| |
|
| **A100 GPU Hours** | 4.358 K | 169.257 K | |
|
| **Cost (USD)** | 6.537 K | 253.886 K | |
|
|
|
This represents approximately a 39× reduction in pretraining cost relative to `QWEN2.5‑0.5B-instruct` (169.257 K ÷ 4.358 K ≈ 38.8).
|
|
|
## Benchmarks |
|
|
|
| **Model** | **KMMLU (5-shot, acc)** | **HAE-RAE (5-shot, acc)** | **CLiCK (5-shot, acc)** | **KoBEST (5-shot, acc)** | |
|
| --- | --- | --- | --- | --- | |
|
| HyperCLOVAX-SEED-Text-Base-0.5B | 0.4181 | 0.6370 | 0.5373 | 0.6963 |
|
| HyperCLOVAX-SEED-Text-Instruct-0.5B | 0.3815 | 0.5619 | 0.4446 | 0.6299 | |
|
| QWEN2.5-0.5B-instruct | 0.2968 | 0.3428 | 0.3805 | 0.5025 | |
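
Scores of this kind can be reproduced with a standard harness. Below is a sketch using `lm-evaluation-harness`; the task names are assumptions and may not match the exact configurations behind the table (CLiCK, for instance, may require a custom task):

```python
import lm_eval

# Assumed lm-evaluation-harness task names; the exact configs used for the
# table above are not specified in this card.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B",
    tasks=["kmmlu", "haerae", "kobest"],
    num_fewshot=5,
)
print(results["results"])
```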
|
|
|
## HuggingFace Usage Example |
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained("naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B")
tokenizer = AutoTokenizer.from_pretrained("naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B")

chat = [
    {"role": "tool_list", "content": ""},
    # System prompt (Korean): 'The AI language model is named "CLOVA X" and was created by NAVER.
    # Today is Thursday, April 24, 2025.'
    {"role": "system", "content": "- AI 언어모델의 이름은 \"CLOVA X\" 이며 네이버에서 만들었다.\n- 오늘은 2025년 04월 24일(목)이다."},
    # User turn (Korean): "Explain the relationship between the Schrödinger equation
    # and quantum mechanics in as much detail as possible."
    {"role": "user", "content": "슈뢰딩거 방정식과 양자역학의 관계를 최대한 자세히 알려줘."},
]

# Apply the model's chat template, then generate until a stop string or the length limit.
inputs = tokenizer.apply_chat_template(chat, add_generation_prompt=True, return_dict=True, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=1024, stop_strings=["<|endofturn|>", "<|stop|>"], tokenizer=tokenizer)
print(tokenizer.batch_decode(output_ids))
```
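
For GPU inference, loading the model in half precision is a common option. A minimal sketch, assuming a CUDA device is available (the dtype choice is an assumption, not an official recommendation):

```python
import torch
from transformers import AutoModelForCausalLM

# Assumption: bfloat16 on a CUDA device; adjust dtype/device to your hardware.
model = AutoModelForCausalLM.from_pretrained(
    "naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B",
    torch_dtype=torch.bfloat16,
).to("cuda")
```

Move the tokenized inputs to the same device before calling `generate`, e.g. `inputs = inputs.to(model.device)`.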
|
|
|
|