---
license: gemma
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-4b-it
tags:
- TensorBlock
- GGUF
---

Feedback and support: TensorBlock's Twitter/X, Telegram group, and Discord server

## google/gemma-3-4b-it - GGUF

This repo contains GGUF format model files for [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4882](https://github.com/ggml-org/llama.cpp/commit/be7c3034108473beda214fd1d7c98fd6a7a3bdf5).
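If you want a build of llama.cpp pinned to that exact commit, the following is a minimal sketch (it assumes `git` and a CMake toolchain are installed; the checkout hash is the commit linked above):

```shell
# Clone llama.cpp and pin it to the compatible commit (b4882).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git checkout be7c3034108473beda214fd1d7c98fd6a7a3bdf5

# Configure and build the CLI tools in release mode.
cmake -B build
cmake --build build --config Release
```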
## Our projects

| Project | Description |
| ------- | ----------- |
| Awesome MCP Servers | A comprehensive collection of Model Context Protocol (MCP) servers. |
| TensorBlock Studio | A lightweight, open, and extensible multi-LLM interaction studio. |
## Prompt template

```
<bos><start_of_turn>user
{system_prompt}

{prompt}<end_of_turn>
<start_of_turn>model
```

## Model file specification

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [gemma-3-4b-it-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q2_K.gguf) | Q2_K | 1.729 GB | smallest, significant quality loss - not recommended for most purposes |
| [gemma-3-4b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q3_K_S.gguf) | Q3_K_S | 1.937 GB | very small, high quality loss |
| [gemma-3-4b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q3_K_M.gguf) | Q3_K_M | 2.098 GB | very small, high quality loss |
| [gemma-3-4b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q3_K_L.gguf) | Q3_K_L | 2.236 GB | small, substantial quality loss |
| [gemma-3-4b-it-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q4_0.gguf) | Q4_0 | 2.363 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [gemma-3-4b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q4_K_S.gguf) | Q4_K_S | 2.378 GB | small, greater quality loss |
| [gemma-3-4b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q4_K_M.gguf) | Q4_K_M | 2.490 GB | medium, balanced quality - recommended |
| [gemma-3-4b-it-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q5_0.gguf) | Q5_0 | 2.764 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [gemma-3-4b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q5_K_S.gguf) | Q5_K_S | 2.764 GB | large, low quality loss - recommended |
| [gemma-3-4b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q5_K_M.gguf) | Q5_K_M | 2.830 GB | large, very low quality loss - recommended |
| [gemma-3-4b-it-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q6_K.gguf) | Q6_K | 3.191 GB | very large, extremely low quality loss |
| [gemma-3-4b-it-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-3-4b-it-GGUF/blob/main/gemma-3-4b-it-Q8_0.gguf) | Q8_0 | 4.130 GB | very large, extremely low quality loss - not recommended |

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/gemma-3-4b-it-GGUF --include "gemma-3-4b-it-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/gemma-3-4b-it-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
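## Running with llama.cpp

Once a file is downloaded, it can be loaded directly with llama.cpp. A minimal sketch, assuming the CMake build shown earlier and using the Q4_K_M file as an example (the directory, file name, and prompt are placeholders):

```shell
# Start an interactive chat; -cnv enables conversation mode, which applies
# the model's built-in chat template, and -p sets the system prompt.
./build/bin/llama-cli -m MY_LOCAL_DIR/gemma-3-4b-it-Q4_K_M.gguf -cnv -p "You are a helpful assistant."
```

In conversation mode, llama-cli formats each turn with the prompt template shown above, so you normally do not need to add the `<start_of_turn>` markers yourself.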