Telescopium-Acyclic-Qwen3-0.6B-GGUF

Telescopium-Acyclic-Qwen3-0.6B is a high-efficiency, multi-domain model fine-tuned from Qwen3-0.6B that specializes in symbolic reasoning across mathematics, programming, and scientific domains. Using a Directed Acyclic Graph (DAG) based reasoning methodology inspired by DeepSeek-R1 traces, it decomposes complex problems into logical multi-step solutions. Trained with enhanced code expert clusters and an open code-reasoning dataset, the model delivers unified outputs in LaTeX, Markdown, and other structured formats. It is optimized for deployment on mid-range GPUs and edge AI systems, giving developers, educators, and researchers precise, step-by-step analytical reasoning for STEM tasks, algorithm synthesis, and technical documentation, while maintaining a lightweight footprint for offline and cluster environments.

Model Files

| File Name | Quant Type | File Size |
|---|---|---|
| Telescopium-Acyclic-Qwen3-0.6B.BF16.gguf | BF16 | 1.2 GB |
| Telescopium-Acyclic-Qwen3-0.6B.F16.gguf | F16 | 1.2 GB |
| Telescopium-Acyclic-Qwen3-0.6B.F32.gguf | F32 | 2.39 GB |
| Telescopium-Acyclic-Qwen3-0.6B.Q2_K.gguf | Q2_K | 296 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q3_K_L.gguf | Q3_K_L | 368 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q3_K_M.gguf | Q3_K_M | 347 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q3_K_S.gguf | Q3_K_S | 323 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q4_K_M.gguf | Q4_K_M | 397 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q4_K_S.gguf | Q4_K_S | 383 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q5_K_M.gguf | Q5_K_M | 444 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q5_K_S.gguf | Q5_K_S | 437 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q6_K.gguf | Q6_K | 495 MB |
| Telescopium-Acyclic-Qwen3-0.6B.Q8_0.gguf | Q8_0 | 639 MB |
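As a rule of thumb, a GGUF file's size is roughly the parameter count times the average bits per weight. A minimal sketch of that estimate (the bits-per-weight figures are approximate llama.cpp values, and small models carry relatively more overhead from metadata and higher-precision tensors, so the actual files above run somewhat larger than the estimate):

```python
# Rough GGUF size estimate: params * bits_per_weight / 8 bytes.
# Bits-per-weight values are approximate averages; real files also
# include metadata and some tensors kept at higher precision.
PARAMS = 596_000_000  # Qwen3-0.6B parameter count

BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,
    "Q4_K_M": 4.85,  # approximate average for K-quant mixes
    "Q2_K": 2.6,
}

def estimate_size_mb(quant: str, params: int = PARAMS) -> float:
    """Return an approximate file size in megabytes for a quant type."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e6

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{estimate_size_mb(q):.0f} MB")
```

For F16 this gives ~1192 MB, matching the 1.2 GB file above; for Q8_0 it gives ~633 MB against the actual 639 MB, with the gap growing for the smallest quants where fixed overhead dominates.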

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

(graph image not reproduced here)
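To try one of these quants locally, one option is llama.cpp. A minimal sketch, assuming `huggingface-cli` (from `huggingface_hub`) and a built `llama-cli` are on your PATH; Q4_K_M is just an example pick, and the system prompt is a placeholder:

```shell
# Download a single quant file from the repo (pick the one that fits your RAM)
huggingface-cli download prithivMLmods/Telescopium-Acyclic-Qwen3-0.6B-GGUF \
  Telescopium-Acyclic-Qwen3-0.6B.Q4_K_M.gguf --local-dir .

# Start an interactive chat with llama.cpp
llama-cli -m Telescopium-Acyclic-Qwen3-0.6B.Q4_K_M.gguf -cnv \
  -p "You are a helpful step-by-step reasoning assistant."
```

Any other file from the table works the same way; only the filename changes.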

GGUF Details

Model size: 596M params
Architecture: qwen3

Model Tree for prithivMLmods/Telescopium-Acyclic-Qwen3-0.6B-GGUF

Finetuned from: Qwen/Qwen3-0.6B
Quantized (2): this model
