Qwen2.5-Math-7B-RoPE-300k

This model is a variant of Qwen/Qwen2.5-Math-7B in which the RoPE base frequency (rope_theta) was increased to 300k, extending the model's context window from 4k to 32k tokens.

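Because the only change relative to the base model is the RoPE base frequency stored in the checkpoint's config, the model loads with the standard transformers API. Below is a minimal sketch; the `rope_theta` and `max_position_embeddings` field names are the usual Qwen2 config keys and are an assumption here, and the commented values simply reflect the description above.

```python
# Minimal sketch: inspect the RoPE settings and load the model with transformers.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "open-r1/Qwen2.5-Math-7B-RoPE-300k"

# Inspect the RoPE base frequency and context length without downloading the weights.
config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)               # expected: 300000.0, per the description above
print(config.max_position_embeddings)  # expected: 32768 (32k context)

# Load as usual; no extra arguments are needed, since the extended RoPE base
# frequency is already baked into the checkpoint's config.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
```
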
Citation

If you find this model useful in your work, please cite the original source:

@article{yang2024qwen25mathtechnicalreportmathematical,
  title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement}, 
  author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
  journal={arXiv preprint arXiv:2409.12122},
  year={2024}
}