---
license: mit
language:
- zh
- en
base_model:
- inclusionAI/Ling-lite-base-1.5
---
# Ring-lite-2506
<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>

<p align="center">
    🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>
</p>
## Introduction
Ring-lite-2506 is a lightweight, fully open-source MoE (Mixture of Experts) LLM designed for complex reasoning tasks. It is built upon the publicly available [Ling-lite-1.5](https://huggingface.co/inclusionAI/Ling-lite-1.5) model, which has 16.8B total parameters, of which 2.75B are activated per token. Using a joint training pipeline that combines knowledge distillation with reinforcement learning, it achieves performance comparable to state-of-the-art (SOTA) small-scale reasoning models on challenging benchmarks (AIME, LiveCodeBench, and GPQA-Diamond) while activating only one-third of their parameters.
## Model Downloads
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
| Ring-lite-2506 | 16.8B | 2.75B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-lite-2506) |
</div>
## Evaluation
To comprehensively evaluate the quality of our reasoning model, we use automated benchmarks spanning math, code, and science.
<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*iAXESaxrbDcAAAAATtAAAAgAemJ7AQ/original" width="1000"/>
</p>
More details are reported in our [technical report](https://arxiv.org/abs/2506.14731).
## Quickstart
### 🤗 Hugging Face Transformers
Here is a code snippet showing how to use the chat model with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ring-lite-2506"

# Load the model and tokenizer; device_map="auto" places the model across
# the available devices, and torch_dtype="auto" uses the dtype stored in
# the checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model answers as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192
)
# Strip the prompt tokens so that only the newly generated answer is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
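For interactive use, you can optionally stream tokens to stdout as they are generated using `transformers`' built-in `TextStreamer`. This is a minimal sketch that reuses the `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print tokens as they are produced; skip_prompt avoids echoing the
# chat-template prefix back to the user.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=8192,
    streamer=streamer
)
```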
## Dataset
The training data of Ring-lite-2506 is released at [Ring-lite-sft-data](https://huggingface.co/datasets/inclusionAI/Ring-lite-sft-data) and [Ring-lite-rl-data](https://huggingface.co/datasets/inclusionAI/Ring-lite-rl-data).
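Both datasets can be inspected with the `datasets` library. The sketch below assumes the default configuration and split names published on the Hub; check each dataset card for the exact layout.

```python
from datasets import load_dataset

# Load the SFT and RL training data from the Hugging Face Hub.
# Configuration/split names are assumed to be the Hub defaults.
sft_data = load_dataset("inclusionAI/Ring-lite-sft-data")
rl_data = load_dataset("inclusionAI/Ring-lite-rl-data")

print(sft_data)  # inspect available splits and columns
```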
## Deployment
For deployment instructions, please refer to our [GitHub repository](https://github.com/inclusionAI/Ring/blob/main/README.md).
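As one illustration, offline batched inference with vLLM's Python API might look like the sketch below. This assumes a vLLM build that supports this model's MoE architecture; treat the GitHub README as the authoritative setup guide.

```python
from vllm import LLM, SamplingParams

# Assumes a vLLM version with support for the Ring/Ling MoE architecture;
# see the GitHub README for the exact environment setup.
llm = LLM(model="inclusionAI/Ring-lite-2506", trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192)

messages = [
    {"role": "system", "content": "You are Ring, an assistant created by inclusionAI"},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```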
## License
This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ring-lite-2506/blob/main/LICENSE).
## Citation
```bibtex
@misc{ringteam2025ringlitescalablereasoningc3postabilized,
title={Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs},
author={Ling Team},
year={2025},
eprint={2506.14731},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.14731},
}
```