---
base_model: google/gemma-3-1b-it
inference: false
library_name: transformers
license: gemma
model_creator: Google
model_name: gemma-3-1b-it
quantized_by: Second State Inc.
pipeline_tag: text-generation
---

# gemma-3-1b-it-GGUF

## Original Model

[google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it)

## Run with LlamaEdge

- LlamaEdge version: coming soon
- Context size: 128000
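
Since the LlamaEdge release supporting this model is still pending, the command below is only a sketch based on how other Second State GGUF cards launch the LlamaEdge API server; the `gemma-instruct` prompt template name and the `llama-api-server.wasm` filename are assumptions and may change once the official instructions land.

```bash
# Sketch only: assumes llama-api-server.wasm from LlamaEdge and the
# gemma-instruct prompt template; verify both once LlamaEdge support ships.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:gemma-3-1b-it-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template gemma-instruct \
  --ctx-size 128000 \
  --model-name gemma-3-1b-it
```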

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| gemma-3-1b-it-Q2_K.gguf | Q2_K | 2 | 690 MB | smallest, significant quality loss - not recommended for most purposes |
| gemma-3-1b-it-Q3_K_L.gguf | Q3_K_L | 3 | 752 MB | small, substantial quality loss |
| gemma-3-1b-it-Q3_K_M.gguf | Q3_K_M | 3 | 722 MB | very small, high quality loss |
| gemma-3-1b-it-Q3_K_S.gguf | Q3_K_S | 3 | 689 MB | very small, high quality loss |
| gemma-3-1b-it-Q4_0.gguf | Q4_0 | 4 | 720 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| gemma-3-1b-it-Q4_K_M.gguf | Q4_K_M | 4 | 806 MB | medium, balanced quality - recommended |
| gemma-3-1b-it-Q4_K_S.gguf | Q4_K_S | 4 | 781 MB | small, greater quality loss |
| gemma-3-1b-it-Q5_0.gguf | Q5_0 | 5 | 808 MB | legacy; medium, balanced quality - prefer using Q4_K_M |
| gemma-3-1b-it-Q5_K_M.gguf | Q5_K_M | 5 | 851 MB | large, very low quality loss - recommended |
| gemma-3-1b-it-Q5_K_S.gguf | Q5_K_S | 5 | 836 MB | large, low quality loss - recommended |
| gemma-3-1b-it-Q6_K.gguf | Q6_K | 6 | 1.01 GB | very large, extremely low quality loss |
| gemma-3-1b-it-Q8_0.gguf | Q8_0 | 8 | 1.07 GB | very large, extremely low quality loss - not recommended |
| gemma-3-1b-it-f16.gguf | f16 | 16 | 2.01 GB | |
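
To fetch a single quantization instead of cloning the whole repository, `huggingface-cli` works well. The repository id `second-state/gemma-3-1b-it-GGUF` below is an assumption inferred from this card's title and quantizer; adjust it if the actual repo differs.

```bash
# Repo id is an assumption inferred from this card; adjust if needed.
huggingface-cli download second-state/gemma-3-1b-it-GGUF \
  gemma-3-1b-it-Q5_K_M.gguf --local-dir .
```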

*Quantized with llama.cpp b4875*
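
The files also run directly under llama.cpp (build b4875 noted above, or newer). A minimal interactive sketch, assuming `llama-cli` is on your PATH:

```bash
# Minimal sketch: -cnv starts an interactive chat using the model's
# built-in chat template; -c sets the context window size in tokens.
llama-cli -m gemma-3-1b-it-Q4_K_M.gguf -cnv -c 8192
```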