library_name: transformers
---
## Overview
HyperCLOVA X SEED Think 14B is a next-generation language model that moves beyond the conventional approach of simply scaling up model size to improve performance. It combines [HyperCLOVA X’s lightweighting technology](https://clova.ai/tech-blog/%EC%9E%91%EC%A7%80%EB%A7%8C-%EA%B0%95%EB%A0%A5%ED%95%98%EA%B2%8C-%EA%B3%A0%ED%9A%A8%EC%9C%A8-llm%EC%9D%84-%EB%A7%8C%EB%93%9C%EB%8A%94-hyperclova-x%EC%9D%98-%EA%B2%BD%EB%9F%89%ED%99%94-%EA%B8%B0) for building high-efficiency LLMs with advanced reasoning capabilities. Its development relied on two key technologies: (1) Pruning & Knowledge Distillation, which achieves both compactness and high performance, and (2) a Reinforcement Learning (RL) pipeline, which maximizes reasoning ability. Pruning low-importance parameters and distilling knowledge from a large model into a smaller one significantly reduced training costs. On top of this, [the latest RL recipe validated in HyperCLOVA X Think](https://arxiv.org/pdf/2506.22403) is applied in a multi-stage process: (1) Supervised Fine-Tuning (SFT), (2) Reinforcement Learning with Verifiable Rewards (RLVR), (3) Length Controllability (LC) for reasoning-path optimization, and (4) joint training of Reinforcement Learning from Human Feedback (RLHF) and RLVR.
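
The pruning-and-distillation step can be illustrated with a toy sketch. This model card does not disclose HyperCLOVA X's actual pruning criterion or distillation objective, so the unstructured magnitude pruning and temperature-softened KL distillation loss below are generic stand-ins for the general technique, not the model's real recipe:

```python
import math

def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest |value|.

    Toy unstructured magnitude pruning; production pipelines operate on
    tensors and often prune structured groups (attention heads, channels).
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k] if k < len(flat) else float("inf")
    return [[0.0 if abs(w) < threshold else w for w in row] for row in weights]

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions:
    the student learns to match the teacher's soft predictions."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# At 50% sparsity the two smallest-magnitude weights are zeroed:
print(prune_by_magnitude([[0.9, -0.05], [0.02, -1.2]], 0.5))
# A student that matches the teacher exactly incurs zero distillation loss:
print(distill_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))
```

In a real distillation run the loss above would be combined with the ordinary next-token cross-entropy and backpropagated through the pruned student only.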