
unsloth/gemma-3n-E2B-it-GGUF
Image-Text-to-Text • 4B • Updated • 104k downloads • 35 likes
Not all quantized models perform well. Go to the model page's discussion, where you will find my comment with the MMLU scores (and detailed logs).
Note:
| Score | Model | GGUF Size |
|-------|-------|-----------|
| 36.43 | Q4_0.gguf | 2.97GB |
| 41.43 | Q8_0.gguf | 4.79GB |
Note:
| Score | Model | GGUF Size |
|-------|-------|-----------|
| 50.00 | Q4_0.gguf | 17.4GB |
Note:
| Score | Model | GGUF Size |
|-------|-------|-----------|
| 56.43 | Q4_0.gguf | 12.4GB |
Note:
| Score | Model | GGUF Size | Source |
|-------|-------|-----------|--------|
| 71.2 | Q2_K_S.gguf | 10.7GB | intel |
| 70.7 | Q2_K.gguf | 11.3GB | unsloth |
| 76.0 | Q8_0.gguf | 32.5GB | unsloth |
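The tables above show the tradeoff these notes are about: a bigger GGUF file is not always worth it, so you pick the highest-scoring quant that fits your memory budget. A minimal sketch of that selection, using the last table's scores and sizes (the `best_fit` helper is hypothetical, not part of any library):

```python
# MMLU score and file size (GB) per quant, taken from the table above.
quants = [
    ("Q2_K_S.gguf", 71.2, 10.7),  # from intel
    ("Q2_K.gguf",   70.7, 11.3),  # from unsloth
    ("Q8_0.gguf",   76.0, 32.5),  # from unsloth
]

def best_fit(budget_gb, table):
    """Return the highest-scoring quant whose file fits in budget_gb, or None."""
    fitting = [q for q in table if q[2] <= budget_gb]
    return max(fitting, key=lambda q: q[1], default=None)

print(best_fit(12.0, quants))  # -> ('Q2_K_S.gguf', 71.2, 10.7)
print(best_fit(40.0, quants))  # -> ('Q8_0.gguf', 76.0, 32.5)
```

With ~12GB free, Q2_K_S beats Q2_K on both score and size; only with ~33GB or more does Q8_0's higher score become reachable.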