Kimina-Prover-Distill-1.7B-F32-GGUF

AI-MO/Kimina-Prover-Distill-1.7B is a theorem-proving model developed by the Project Numina and Kimi teams, focused on competition-style problem solving in Lean 4. It is a distillation of Kimina-Prover-72B, a model trained via large-scale reinforcement learning, and achieves 72.95% accuracy on MiniF2F-test at Pass@32 (i.e., at least one of 32 sampled proofs verifies).
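
To make the task concrete, here is a small, illustrative Lean 4 statement of the kind the prover is asked to close. The theorem name and proof are ours, not drawn from MiniF2F:

```lean
import Mathlib

-- Illustrative only: a MiniF2F-style goal in Lean 4. The model receives a
-- statement like this and must produce a proof term or tactic script.
theorem sq_add_sq_nonneg (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by
  positivity
```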

Model Files

| File Name | Size | Format |
|-----------|------|--------|
| Kimina-Prover-Distill-1.7B.gitattributes | 2.48 kB | Git attributes |
| Kimina-Prover-Distill-1.7B.BF16.gguf | 3.45 GB | GGUF BF16 |
| Kimina-Prover-Distill-1.7B.F16.gguf | 3.45 GB | GGUF F16 |
| Kimina-Prover-Distill-1.7B.F32.gguf | 6.89 GB | GGUF F32 |
| Kimina-Prover-Distill-1.7B.Q2_K.gguf | 778 MB | GGUF Q2_K |
| Kimina-Prover-Distill-1.7B.Q3_K_L.gguf | 1 GB | GGUF Q3_K_L |
| Kimina-Prover-Distill-1.7B.Q3_K_M.gguf | 940 MB | GGUF Q3_K_M |
| Kimina-Prover-Distill-1.7B.Q3_K_S.gguf | 867 MB | GGUF Q3_K_S |
| Kimina-Prover-Distill-1.7B.Q4_K_M.gguf | 1.11 GB | GGUF Q4_K_M |
| Kimina-Prover-Distill-1.7B.Q4_K_S.gguf | 1.06 GB | GGUF Q4_K_S |
| Kimina-Prover-Distill-1.7B.Q5_K_M.gguf | 1.26 GB | GGUF Q5_K_M |
| Kimina-Prover-Distill-1.7B.Q5_K_S.gguf | 1.23 GB | GGUF Q5_K_S |
| Kimina-Prover-Distill-1.7B.Q6_K.gguf | 1.42 GB | GGUF Q6_K |
| Kimina-Prover-Distill-1.7B.Q8_0.gguf | 1.83 GB | GGUF Q8_0 |
| README.md | 180 Bytes | Markdown |
| config.json | 29 Bytes | JSON |
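
A minimal sketch of local inference with one of these quants via llama-cpp-python. The file path, sampling settings, and prompt wording are assumptions; adjust them to your setup and to the prompt format documented in the upstream AI-MO model card:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumed local path: the Q4_K_M quant from the table above.
llm = Llama(
    model_path="Kimina-Prover-Distill-1.7B.Q4_K_M.gguf",
    n_ctx=4096,        # context window; proof traces can be long
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Illustrative prompt, not the official template.
out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Complete the following Lean 4 proof:\n"
                   "theorem foo (a b : ℝ) : 0 ≤ a ^ 2 + b ^ 2 := by sorry",
    }],
    max_tokens=1024,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```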

Quants Usage

Quants are sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.
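
If you only need a single quant, you can fetch one file rather than cloning the whole repository; a sketch using huggingface_hub (the chosen filename is just an example from the table above):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download one quant file; returns the local cache path.
path = hf_hub_download(
    repo_id="prithivMLmods/Kimina-Prover-Distill-1.7B-F32-GGUF",
    filename="Kimina-Prover-Distill-1.7B.Q4_K_M.gguf",
)
print(path)
```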

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph by ikawrakow: quality comparison of low-bit quant types]

Format: GGUF
Model size: 1.72B params
Architecture: qwen3


Model tree for prithivMLmods/Kimina-Prover-Distill-1.7B-F32-GGUF

Base model: Qwen/Qwen3-1.7B (finetuned). This repository is one of 5 quantized variants of that finetune.