---
license: apache-2.0
language:
  - en
base_model:
  - prithivMLmods/Megatron-Bots-1.7B-Reasoning
pipeline_tag: text-generation
library_name: transformers
tags:
  - text-generation-inference
---

# Megatron-Bots-1.7B-Reasoning-GGUF

Megatron-Bots-1.7B-Reasoning is a logical reasoning and general-purpose thinking model fine-tuned from Qwen3-1.7B, specifically designed for advanced reasoning tasks and analytical problem-solving. Built with data entries from the SynLogic Dataset, it excels at structured thinking, logical deduction, and comprehensive problem analysis in a compact yet powerful architecture.
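
The GGUF files below are intended for llama.cpp-compatible runtimes. As a minimal sketch (not an official recipe), the model can be loaded with the `llama-cpp-python` package; the file name is taken from the table below and assumes it has already been downloaded locally, and the prompt is purely illustrative:

```python
# Minimal sketch: run a locally downloaded quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Megatron-Bots-1.7B-Reasoning.Q4_K_M.gguf",  # any quant from the table below
    n_ctx=4096,  # context window; adjust to available memory
)

out = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "If all bloops are razzies and all razzies are lazzies, are all bloops lazzies?",
        }
    ],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```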

## Model Files

| File Name | Size | Format | Description |
|---|---|---|---|
| Megatron-Bots-1.7B-Reasoning.F32.gguf | 6.89 GB | F32 | Full-precision 32-bit floating point |
| Megatron-Bots-1.7B-Reasoning.F16.gguf | 3.45 GB | F16 | Half-precision 16-bit floating point |
| Megatron-Bots-1.7B-Reasoning.BF16.gguf | 3.45 GB | BF16 | Brain floating point, 16-bit |
| Megatron-Bots-1.7B-Reasoning.Q8_0.gguf | 1.83 GB | Q8_0 | 8-bit quantized |
| Megatron-Bots-1.7B-Reasoning.Q6_K.gguf | 1.42 GB | Q6_K | 6-bit quantized |
| Megatron-Bots-1.7B-Reasoning.Q5_K_M.gguf | 1.26 GB | Q5_K_M | 5-bit quantized, medium quality |
| Megatron-Bots-1.7B-Reasoning.Q5_K_S.gguf | 1.23 GB | Q5_K_S | 5-bit quantized, small, lower quality |
| Megatron-Bots-1.7B-Reasoning.Q4_K_M.gguf | 1.11 GB | Q4_K_M | 4-bit quantized, medium quality |
| Megatron-Bots-1.7B-Reasoning.Q4_K_S.gguf | 1.06 GB | Q4_K_S | 4-bit quantized, small, lower quality |
| Megatron-Bots-1.7B-Reasoning.Q3_K_L.gguf | 1 GB | Q3_K_L | 3-bit quantized, large, higher quality |
| Megatron-Bots-1.7B-Reasoning.Q3_K_M.gguf | 940 MB | Q3_K_M | 3-bit quantized, medium quality |
| Megatron-Bots-1.7B-Reasoning.Q3_K_S.gguf | 867 MB | Q3_K_S | 3-bit quantized, small, lower quality |
| Megatron-Bots-1.7B-Reasoning.Q2_K.gguf | 778 MB | Q2_K | 2-bit quantized |
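
To fetch a single quant instead of cloning the whole repository, `huggingface_hub` can download one file at a time. A sketch follows; the repo id is inferred from this card's title and should be adjusted if it differs:

```python
# Sketch: download one quant file from the Hub (repo id assumed from the card title).
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="prithivMLmods/Megatron-Bots-1.7B-Reasoning-GGUF",
    filename="Megatron-Bots-1.7B-Reasoning.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```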

## Quants Usage

(Sorted by size, which does not necessarily reflect quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![Quant type comparison graph by ikawrakow](image.png)