SmolLM2-1.7B-Instruct-GGUF

SmolLM2-1.7B-Instruct: The 1.7B variant demonstrates significant advances over its predecessor, SmolLM1-1.7B, particularly in instruction following, knowledge, reasoning, and mathematics. It was trained on 11 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new mathematics and coding datasets that we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) on a combination of public datasets and our own curated datasets, and then applied Direct Preference Optimization (DPO) using UltraFeedback.

Model Files

| File Name | Size | Description |
| --- | --- | --- |
| SmolLM2-1.7B-Instruct.F32.gguf | 6.85 GB | Full precision (32-bit floating point) |
| SmolLM2-1.7B-Instruct.BF16.gguf | 3.42 GB | Brain floating point 16-bit |
| SmolLM2-1.7B-Instruct.F16.gguf | 3.42 GB | Half precision (16-bit floating point) |
| SmolLM2-1.7B-Instruct.Q8_0.gguf | 1.82 GB | 8-bit quantization |
| SmolLM2-1.7B-Instruct.Q6_K.gguf | 1.41 GB | 6-bit quantization (K-quant) |
| SmolLM2-1.7B-Instruct.Q5_K_M.gguf | 1.23 GB | 5-bit quantization (K-quant, medium) |
| SmolLM2-1.7B-Instruct.Q5_K_S.gguf | 1.19 GB | 5-bit quantization (K-quant, small) |
| SmolLM2-1.7B-Instruct.Q4_K_M.gguf | 1.06 GB | 4-bit quantization (K-quant, medium) |
| SmolLM2-1.7B-Instruct.Q4_K_S.gguf | 999 MB | 4-bit quantization (K-quant, small) |
| SmolLM2-1.7B-Instruct.Q3_K_L.gguf | 933 MB | 3-bit quantization (K-quant, large) |
| SmolLM2-1.7B-Instruct.Q3_K_M.gguf | 860 MB | 3-bit quantization (K-quant, medium) |
| SmolLM2-1.7B-Instruct.Q3_K_S.gguf | 777 MB | 3-bit quantization (K-quant, small) |
| SmolLM2-1.7B-Instruct.Q2_K.gguf | 675 MB | 2-bit quantization (K-quant) |
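
To get started with one of these files, here is a minimal sketch that downloads the Q4_K_M quant and runs a single chat turn with llama-cpp-python. The repo id is an assumption taken from this page (substitute the actual id if it differs), and the context size and sampling settings are illustrative, not tuned values.

```python
# Minimal sketch: download one quant and chat with it via llama-cpp-python.
# Assumes: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the 4-bit medium K-quant (1.06 GB) from the table above.
model_path = hf_hub_download(
    repo_id="prithivMLmods/SmolLM2-1.7B-F32-GGUF",  # assumed repo id; adjust if needed
    filename="SmolLM2-1.7B-Instruct.Q4_K_M.gguf",
)

# Load the model; n_ctx is an illustrative context window, not a tuned value.
llm = Llama(model_path=model_path, n_ctx=2048)

# Run one chat turn; create_chat_completion applies the model's chat template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what GGUF is."}],
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```

Any other quant in the table works the same way; swap the filename to trade download size and memory use against output quality.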

Quants Usage

(Sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[graph image not included in this copy]
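
If you prefer to compare the available quants programmatically rather than reading the table, the following huggingface_hub sketch lists every GGUF file in the repo with its size (again, the repo id is an assumption; adjust as needed):

```python
# Sketch: list the GGUF files in the repo with their sizes to help pick a quant.
# Assumes: pip install huggingface_hub
from huggingface_hub import HfApi

api = HfApi()
# files_metadata=True asks the Hub to include per-file sizes.
info = api.model_info("prithivMLmods/SmolLM2-1.7B-F32-GGUF", files_metadata=True)  # assumed repo id

for f in sorted(info.siblings, key=lambda s: s.size or 0):
    if f.rfilename.endswith(".gguf"):
        print(f"{f.rfilename}: {(f.size or 0) / 1e9:.2f} GB")
```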

Format: GGUF
Model size: 1.71B params
Architecture: llama

