PCL-Reasoner-V1

Model Overview

We release PCL-Reasoner-V1, a model built on Qwen2.5-32B-Base and post-trained with high-performance supervised fine-tuning on the MindSpore framework and Ascend hardware. After fine-tuning, the model shows significant gains in mathematical reasoning: PCL-Reasoner-V1 scores 85.7% on AIME 24 and 84.2% on AIME 25, placing it among the top-tier models in the 32B parameter class on these benchmarks.

(Figure: evaluation results on AIME 24/25.)

We have fully open-sourced the model weights, dataset, and training code. Follow the repositories below to deploy the model and explore post-training; a minimal loading sketch follows the links.

Code

https://github.com/PCL-Reasoner/V1

https://openi.pcl.ac.cn/PCL-Reasoner/V1
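
As a quickstart, here is a minimal loading-and-generation sketch using Hugging Face Transformers. It is illustrative only, not the official MindSpore/Ascend deployment path: it assumes the released weights are available in Transformers format and that the tokenizer ships a chat template, and the repository id below is a placeholder to be replaced with the actual weight location.

```python
# Minimal sketch, not the official MindSpore deployment path.
# Assumptions: weights published in Hugging Face Transformers format,
# tokenizer includes a chat template. "PCL-Reasoner/V1" is a
# placeholder repo id -- replace with the actual weight location.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PCL-Reasoner/V1"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "If x + 1/x = 3, find x^3 + 1/x^3."
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Long reasoning chains need a generous token budget.
outputs = model.generate(
    inputs, max_new_tokens=8192, do_sample=True, temperature=0.6, top_p=0.95
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```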

Evaluation

We evaluate with the Avg@32 metric: each query is sampled 32 times and the per-sample scores are averaged. A small sketch of this computation follows the table below.

| Parameter Size | Model Name | AIME 24 | AIME 25 |
|---|---|---|---|
| >100B | DeepSeek-R1 | 79.8 | 70.0 |
| >100B | DeepSeek-R1-0528 | 91.4 | 87.5 |
| >100B | Qwen3-235B-A22B | 85.7 | 81.5 |
| >100B | OpenAI-o3 | 91.6 | 88.9 |
| >100B | Gemini-2.5-Pro-0506 | 90.8 | 83.0 |
| 32B | Qwen3-32B | 81.4 | 72.9 |
| 32B | QwQ-32B | 79.5 | 69.5 |
| 32B | DeepSeek-R1-Distill-Qwen-32B | 72.6 | 49.6 |
| 32B | Skywork-OR1-32B | 82.2 | 73.3 |
| 32B | AM-Thinking-v1 | 85.3 | 74.4 |
| 32B | PCL-Reasoner-v1 | 85.7 | 84.2 |
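
For concreteness, the sketch below shows how Avg@32 can be computed from per-sample correctness judgments. The function name and data layout are illustrative assumptions, not taken from the released evaluation code.

```python
# Illustrative Avg@32 sketch; names and data layout are assumptions,
# not the released evaluation code.
def avg_at_k(correct: list[list[bool]]) -> float:
    """correct[i][j]: whether the j-th of k samples for query i is judged correct."""
    per_query = [sum(samples) / len(samples) for samples in correct]
    return 100.0 * sum(per_query) / len(per_query)

# Example: 2 queries, 32 samples each (all correct / half correct).
demo = [[True] * 32, [True] * 16 + [False] * 16]
print(avg_at_k(demo))  # 75.0
```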