---
base_model: Qwen/Qwen2.5-Math-7B
language:
  - en
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-7B/blob/main/LICENSE
---

# Qwen2.5-Math-7B-RoPE-300k

This model is a variant of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) whose RoPE base frequency has been increased to 300k, extending the model's context window from 4k to 32k tokens.
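To see why raising the base frequency extends the usable context, recall that RoPE rotates each pair of head dimensions at frequency `base ** (-2i / head_dim)`; a larger base slows these rotations, so positional phases remain distinguishable over longer sequences. The sketch below (illustrative only, not the model's implementation; `head_dim=128` is assumed) compares per-dimension wavelengths at the two base values:

```python
import math

def rope_wavelengths(base: float, head_dim: int = 128) -> list[float]:
    """Wavelength (in tokens) of each rotary frequency pair.

    theta_i = base ** (-2*i / head_dim); wavelength_i = 2*pi / theta_i.
    """
    return [2 * math.pi * base ** (2 * i / head_dim)
            for i in range(head_dim // 2)]

default_wl = rope_wavelengths(10_000.0)    # original Qwen2.5-Math-7B base
extended_wl = rope_wavelengths(300_000.0)  # base used by this variant

# Raising the base stretches every wavelength (except the first, which is
# always 2*pi), so more token positions fit within one full rotation of
# the slowest-varying dimensions.
ratio = extended_wl[-1] / default_wl[-1]
```

Here `ratio` is well above 1, reflecting how much farther apart two positions can be before the lowest-frequency dimensions complete a full rotation and their phases become ambiguous.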

## Citation

If you find this model useful in your work, please cite the original source:

```bibtex
@article{yang2024qwen25mathtechnicalreportmathematical,
  title={Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement},
  author={An Yang and Beichen Zhang and Binyuan Hui and Bofei Gao and Bowen Yu and Chengpeng Li and Dayiheng Liu and Jianhong Tu and Jingren Zhou and Junyang Lin and Keming Lu and Mingfeng Xue and Runji Lin and Tianyu Liu and Xingzhang Ren and Zhenru Zhang},
  journal={arXiv preprint arXiv:2409.12122},
  year={2024}
}
```