
## Introduction

E1-Math-1.5B is a language model fine-tuned from DeepSeek-R1-Distill-Qwen-1.5B. It is trained for Elastic Reasoning with a budget-constrained rollout strategy integrated into GRPO, which teaches the model to reason adaptively when the thinking process is cut short and to generalize to unseen budget constraints without additional training.
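To make the rollout idea concrete, here is a minimal sketch of a budget-constrained rollout: thinking and solution phases receive independent token budgets, and an exhausted thinking budget is force-closed with `</think>` so the model must still produce a solution. The function name, budget values, and the stop-string handling below are illustrative assumptions, not the authors' training code.

```python
# Hypothetical sketch of a budget-constrained rollout (not the official
# implementation): independent thinking/solution budgets, with a forced
# "</think>" when the thinking budget runs out.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/E1-Math-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"  # device_map needs `accelerate`
)

def budget_constrained_rollout(prompt: str, think_budget: int, solution_budget: int) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    # Phase 1: think for at most `think_budget` tokens, stopping early at "</think>".
    # (`stop_strings` requires a recent transformers release.)
    out = model.generate(ids, max_new_tokens=think_budget,
                         stop_strings=["</think>"], tokenizer=tokenizer)
    thinking = tokenizer.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
    if "</think>" not in thinking:
        thinking += "</think>"  # budget exhausted: force the transition to the solution
    # Phase 2: answer under the separate solution budget.
    ids = tokenizer(prompt + thinking, return_tensors="pt").input_ids.to(model.device)
    out = model.generate(ids, max_new_tokens=solution_budget)
    return thinking + tokenizer.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)
```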

## Performance (Avg@16)

| Model | Tokens | Acc (%) | Tokens | Acc (%) | Tokens | Acc (%) | Tokens | Acc (%) | Tokens | Acc (%) |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| DeepScaleR-1.5B | 10050 | 41.0 | 1488 | 5.2 | 1904 | 9.6 | 2809 | 15.8 | 3700 | 22.7 |
| E1-Math-1.5B | 6825 | 35.0 | 1340 | 13.5 | 1799 | 17.5 | 2650 | 24.8 | 3377 | 27.9 |

## Usage

For detailed usage, please refer to the project's code repository. A minimal inference sketch is shown below.
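The following is a standard Transformers chat workflow; the prompt, sampling settings, and token budget are illustrative assumptions, not official recommendations.

```python
# Minimal inference sketch with Hugging Face Transformers; settings here are
# illustrative, not the authors' recommended configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Salesforce/E1-Math-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "What is the sum of the first 100 positive integers?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens doubles as the overall generation budget; E1 is trained so
# that accuracy degrades gracefully as this budget shrinks.
output = model.generate(input_ids, max_new_tokens=2048, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Because the model generalizes to unseen budget constraints, lowering `max_new_tokens` trades accuracy for inference cost without any retraining.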

## Citation

```bibtex
@article{xu2025scalable,
  title={Scalable Chain of Thoughts via Elastic Reasoning},
  author={Xu, Yuhui and Dong, Hanze and Wang, Lei and Sahoo, Doyen and Li, Junnan and Xiong, Caiming},
  journal={arXiv preprint arXiv:2505.05315},
  year={2025}
}
```

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people's lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
