kbu1564 committed
Commit 0c229c4 · verified · 1 Parent(s): f19bd2e

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -9,7 +9,7 @@ library_name: transformers
 
 ## Overview
 
-HyperCLOVA X SEED Think 14B is a next-generation language model that moves beyond the conventional approach of simply increasing model size to improve performance. It combines [HyperCLOVA X’s lightweighting technology](https://clova.ai/tech-blog/%EC%9E%91%EC%A7%80%EB%A7%8C-%EA%B0%95%EB%A0%A5%ED%95%98%EA%B2%8C-%EA%B3%A0%ED%9A%A8%EC%9C%A8-llm%EC%9D%84-%EB%A7%8C%EB%93%9C%EB%8A%94-hyperclova-x%EC%9D%98-%EA%B2%BD%EB%9F%89%ED%99%94-%EA%B8%B0) for building high-efficiency LLMs with advanced reasoning capabilities. Its development relied on two key technologies: (1) Pruning & Knowledge Distillation, which achieves both compactness and high performance, and (2) a Reinforcement Learning (RL) pipeline, which maximizes reasoning ability. By pruning low-importance parameters and distilling knowledge from a large model into a smaller one, training costs have been significantly reduced. On top of this, [the latest RL recipe validated in HyperCLOVA X Think](https://arxiv.org/pdf/2506.22403) is applied in a multi-stage process: (1) Supervised Fine-Tuning (SFT), (2) Reinforcement Learning with Verifiable Rewards (RLVR), (3) Length Controllability (LC) for reasoning path optimization, and (4) a joint training of Reinforcement Learning from Human Feedback (RLHF) and RLVR.
+HyperCLOVA X SEED Think 14B is a next-generation language model that moves beyond the conventional approach of simply increasing model size to improve performance. It combines [HyperCLOVA X’s lightweighting technology](https://tinyurl.com/y3hrfz67) for building high-efficiency LLMs with advanced reasoning capabilities. Its development relied on two key technologies: (1) Pruning & Knowledge Distillation, which achieves both compactness and high performance, and (2) a Reinforcement Learning (RL) pipeline, which maximizes reasoning ability. By pruning low-importance parameters and distilling knowledge from a large model into a smaller one, training costs have been significantly reduced. On top of this, [the latest RL recipe validated in HyperCLOVA X Think](https://arxiv.org/pdf/2506.22403) is applied in a multi-stage process: (1) Supervised Fine-Tuning (SFT), (2) Reinforcement Learning with Verifiable Rewards (RLVR), (3) Length Controllability (LC) for reasoning path optimization, and (4) a joint training of Reinforcement Learning from Human Feedback (RLHF) and RLVR.
 
 It is a considerable challenge to equip a pruned, knowledge-distilled model with reasoning capabilities, since reductions in training costs and model size often degrade reasoning performance. However, through extensive research experience and persistent trial and error, the HyperCLOVA X team has succeeded in lowering training costs while maintaining reasoning performance comparable to that of larger, resource-intensive models.
 
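The Overview above describes pruning low-importance parameters and then distilling knowledge from a large teacher model into the smaller student. For illustration only, here is a minimal, generic PyTorch sketch of the soft-label distillation loss that this kind of pipeline typically uses; the function name, signature, and temperature value are assumptions, not the HyperCLOVA X team's actual training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label knowledge-distillation loss (Hinton et al., 2015).

    KL divergence between the temperature-softened teacher and student
    token distributions. Illustrative sketch only; not the model's
    actual recipe.
    """
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * (t ** 2)
```

In practice this term is typically mixed with the standard next-token cross-entropy loss, so the pruned student both matches the teacher's output distribution and fits the training corpus.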