Confucius3-Math-GGUF

This model was converted to GGUF format from netease-youdao/Confucius3-Math using llama.cpp. Refer to the original model card for more details on the model.

We provide multiple GGUF quantizations, each stored in its own subdirectory. Note that we have evaluated output quality only for the BF16 version.

Use with llama.cpp

Before running the model, compile and install llama.cpp.
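If you are building from source, a typical build looks like the following (a sketch assuming a CMake toolchain; options vary by platform, e.g. add -DGGML_CUDA=ON for NVIDIA GPUs):

```shell
# Clone llama.cpp and build the release binaries
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```

The binaries used below (llama-gguf-split, llama-cli) are then placed under build/bin.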

Merge the model files

Because the uploaded model is split into multiple shard files, you need to merge them with the following command before running:

./build/bin/llama-gguf-split --merge netease-youdao/Confucius3-Math-GGUF/confucius3-math-bf16-00001-of-00008.gguf confucius3-math-bf16.gguf
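If the shards are not on disk yet, one way to fetch them (assuming the huggingface_hub CLI is installed; the local directory name is just an example) is:

```shell
# Install the Hugging Face CLI and download the repository shards
pip install -U huggingface_hub
huggingface-cli download netease-youdao/Confucius3-Math-GGUF --local-dir netease-youdao/Confucius3-Math-GGUF
```

Note that llama-gguf-split only needs the path to the first shard; it locates the remaining split files automatically.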

Run a conversation

./build/bin/llama-cli -m confucius3-math-bf16.gguf
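llama-cli accepts many optional flags; a few commonly useful ones are shown below (the values are illustrative, not tuned for this model):

```shell
# -c: context length, -ngl: layers to offload to GPU, -p: initial prompt
./build/bin/llama-cli -m confucius3-math-bf16.gguf \
  -c 4096 \
  -ngl 99 \
  -p "Solve: if 3x + 5 = 20, what is x?"
```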

Citation

If you find our work helpful, please cite it as follows:

@misc{confucius3-math,
  author = {NetEase Youdao Team},
  title = {Confucius3-Math: A Lightweight High-Performance Reasoning LLM for Chinese K-12 Mathematics Learning},
  url = {https://arxiv.org/abs/2506.18330},
  month = {June},
  year = {2025}
}
Model size: 14.8B params (GGUF, qwen2 architecture)
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
