Introduction to TraDo

Paper | Code

We introduce TraDo, a family of state-of-the-art (SOTA) diffusion language models trained with TraceRL.

  • TraDo-4B-Instruct and TraDo-8B-Instruct outperform strong autoregressive (AR) models of similar size across math reasoning tasks.
  • TraDo-8B-Thinking is the first long chain-of-thought (Long-CoT) diffusion language model.
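
For convenience, here is a minimal loading sketch using Hugging Face transformers. Only the model id Gen-Verse/TraDo-8B-Thinking and the BF16 tensor type come from this card; the AutoModel/trust_remote_code interface is an assumption, and diffusion language models usually ship their own sampling loop, so see the official repository for the exact generation API.

```python
# Minimal loading sketch for TraDo-8B-Thinking (assumptions noted below).
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Gen-Verse/TraDo-8B-Thinking"  # model id from this card

# Assumption: the checkpoint exposes a transformers-compatible interface
# via custom remote code; the exact model class may differ.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card lists BF16 tensors
    trust_remote_code=True,
)

# Generation for diffusion LMs is typically an iterative denoising loop
# rather than standard autoregressive decoding; use the sampling utilities
# from the official TraDo/TraceRL codebase.
```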

Citation

@article{wang2025trado,
  title={Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models},
  author={Wang, Yinjie and Yang, Ling and Li, Bowen and Tian, Ye and Shen, Ke and Wang, Mengdi},
  journal={arXiv preprint arXiv:2509.06949},
  year={2025}
}

Model size: 8.19B parameters · Tensor type: BF16 (Safetensors)