---
tags:
- vllm
- vision
- audio
- fp8
license: mit
base_model: google/gemma-3n-E2B-it
library_name: transformers
---
# RedHatAI/gemma-3n-E2B-it-FP8-Dynamic
## Model Overview
- **Model Architecture:** gemma-3n-E2B-it
  - **Input:** Audio-Vision-Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 08/01/2025
- **Version:** 1.0
- **Model Developers:** RedHatAI

Quantized version of [google/gemma-3n-E2B-it](https://huggingface.co/google/gemma-3n-E2B-it).
### Model Optimizations
This model was obtained by quantizing the weights and activations of [google/gemma-3n-E2B-it](https://huggingface.co/google/gemma-3n-E2B-it) to the FP8 data type, making it ready for inference with vLLM >= 0.10.0.
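With the FP8-dynamic scheme, weight scales are fixed at compression time while activation scales are computed on the fly from each incoming batch. The sketch below illustrates the idea of dynamic per-token FP8 scaling in plain PyTorch; the helper name and the direct cast to `torch.float8_e4m3fn` are illustrative assumptions, not the fused kernels vLLM actually runs.
```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8_dynamic(x: torch.Tensor):
    """Illustrative per-token dynamic FP8 quantization (hypothetical helper)."""
    # One scale per token (row), derived at runtime from the observed value range.
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / FP8_E4M3_MAX
    x_fp8 = (x / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return x_fp8, scale

activations = torch.randn(4, 2048)            # [num_tokens, hidden_size]
x_fp8, scale = quantize_fp8_dynamic(activations)
dequantized = x_fp8.to(torch.float32) * scale  # approximate reconstruction
print((activations - dequantized).abs().max())  # small quantization error
```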
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from transformers import AutoProcessor

model_id = "RedHatAI/gemma-3n-E2B-it-FP8-Dynamic"

# prepare model
llm = LLM(
    model=model_id,
    trust_remote_code=True,
    max_model_len=4096,
    max_num_seqs=2,
)

# build the prompt with the model's own chat template
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
question = "What is the content of this image?"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": question},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# prepare inputs
inputs = {
    "prompt": prompt,
    "multi_modal_data": {
        "image": ImageAsset("cherry_blossom").pil_image.convert("RGB")
    },
}

# generate response
print("========== SAMPLE GENERATION ==============")
outputs = llm.generate(inputs, SamplingParams(temperature=0.2, max_tokens=64))
print(f"PROMPT : {outputs[0].prompt}")
print(f"RESPONSE: {outputs[0].outputs[0].text}")
print("==========================================")
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
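As a rough sketch, assuming the model has been started locally with `vllm serve RedHatAI/gemma-3n-E2B-it-FP8-Dynamic` on the default port and that the image URL is only a placeholder, an OpenAI-compatible request could look like the following.
```python
# Assumes a running server, e.g.:  vllm serve RedHatAI/gemma-3n-E2B-it-FP8-Dynamic
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RedHatAI/gemma-3n-E2B-it-FP8-Dynamic",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is the content of this image?"},
                # Placeholder URL; replace with a real, reachable image.
                {"type": "image_url", "image_url": {"url": "https://example.com/cherry_blossom.jpg"}},
            ],
        }
    ],
    max_tokens=64,
)
print(response.choices[0].message.content)
```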
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
```python
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from transformers import AutoProcessor, Gemma3nForConditionalGeneration

# Load model.
model_id = "google/gemma-3n-E2B-it"
model = Gemma3nForConditionalGeneration.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

# Recipe: quantize Linear layers to FP8 with dynamic activation scales, keeping the
# audio/vision towers, embeddings, and Gemma-3n-specific projections in full precision.
recipe = [
    QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        ignore=[
            "re:.*embed_audio.*",
            "re:.*embed_vision.*",
            "re:.*audio_tower.*",
            "re:.*vision_tower.*",
            "re:.*altup.*",
            "re:.*lm_head.*",
            "re:.*laurel.*",
            r"re:model\.language_model\.layers\.\d+\.per_layer_input_gate",
            r"re:model\.language_model\.layers\.\d+\.per_layer_projection",
            "model.language_model.per_layer_model_projection",
        ],
    ),
]

SAVE_DIR = f"{model_id.split('/')[1]}-{recipe[0].scheme}"

# Apply the quantization recipe in one shot.
oneshot(
    model=model,
    tokenizer=model_id,
    recipe=recipe,
    trust_remote_code_model=True,
    tie_word_embeddings=True,
    output_dir=SAVE_DIR,
)

# Save the compressed model and processor to disk.
model.save_pretrained(SAVE_DIR, save_compressed=True)
processor.save_pretrained(SAVE_DIR)
```
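As a quick sanity check after saving, the emitted `config.json` should carry a `quantization_config` entry describing the compressed-tensors FP8 scheme and the ignored modules; a minimal sketch, assuming the `SAVE_DIR` from the snippet above, is shown below.
```python
import json
from pathlib import Path

# Inspect the saved checkpoint's quantization metadata (assumes SAVE_DIR from above).
config = json.loads((Path(SAVE_DIR) / "config.json").read_text())
quant_config = config.get("quantization_config", {})
print(quant_config.get("quant_method"))    # expected: "compressed-tensors"
print(quant_config.get("ignore", [])[:5])  # modules kept in full precision
```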
## Evaluation
The model was evaluated with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) on the vLLM backend. A representative command for the OpenLLM V1 tasks is shown below.
### OpenLLM V1
```
lm_eval \
  --model vllm \
  --model_args pretrained="RedHatAI/gemma-3n-E2B-it-FP8-Dynamic" \
  --tasks openllm \
  --batch_size auto
```
### Accuracy

| Category | Metric | google/gemma-3n-E2B-it | FP8 Dynamic | Recovery (%) |
|---|---|---|---|---|
| OpenLLM V1 | arc_challenge | 50.60 | 50.09 | 99.00% |
| | gsm8k | 48.07 | 54.51 | 113.40% |
| | hellaswag | 67.78 | 65.67 | 96.89% |
| | mmlu | 59.92 | 60.16 | 100.40% |
| | truthfulqa_mc2 | 49.98 | 49.48 | 99.00% |
| | winogrande | 65.11 | 63.85 | 98.06% |
| | **Average** | **56.91** | **57.29** | **100.67%** |
| Leaderboard | bbh | 53.32 | 52.99 | 99.38% |
| | mmlu_pro | 29.76 | 29.36 | 98.66% |
| | musr | 34.52 | 35.85 | 103.85% |
| | ifeval | 80.22 | 80.58 | 100.45% |
| | gpqa | 30.54 | 29.36 | 96.14% |
| | math_hard | 34.52 | 34.97 | 101.30% |
| | **Average** | **43.81** | **43.85** | **100.09%** |
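Recovery is the quantized score expressed as a percentage of the baseline score. The snippet below reproduces a couple of the values above, using numbers copied directly from the table.
```python
# Recovery (%) = 100 * quantized_score / baseline_score, values taken from the table above.
def recovery(baseline: float, quantized: float) -> float:
    return round(100 * quantized / baseline, 2)

print(recovery(48.07, 54.51))  # gsm8k              -> 113.4
print(recovery(56.91, 57.29))  # OpenLLM V1 average -> 100.67
```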