---
tags:
- fp8
- vllm
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: ibm-granite/granite-3.1-8b-instruct
library_name: transformers
---
# Granite-3.1-8b-instruct-FP8-dynamic
## Model Overview
- **Model Architecture:** granite-3.1-8b-instruct
  - **Input:** Text
  - **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** FP8
  - **Activation quantization:** FP8
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Model Developers:** Neural Magic

Quantized version of [ibm-granite/granite-3.1-8b-instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct).
It achieves an average score of 70.57 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 70.30.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [ibm-granite/granite-3.1-8b-instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformer blocks are quantized.
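The quantization scheme is recorded in the checkpoint's `config.json` and can be inspected directly. The snippet below is a minimal sketch; the exact keys present in `quantization_config` depend on the llm-compressor and compressed-tensors versions used to produce the checkpoint.

```python
# Minimal sketch: inspect the quantization scheme stored with the checkpoint.
# Assumes the checkpoint ships a compressed-tensors `quantization_config`
# in its config.json, as llm-compressor checkpoints do.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("neuralmagic/granite-3.1-8b-instruct-FP8-dynamic")
quant_config = getattr(config, "quantization_config", None)
if quant_config is not None:
    # Expect FP8 weight/activation schemes targeting Linear modules,
    # with lm_head listed under "ignore".
    print(quant_config)
```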
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-8b-instruct-FP8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
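For example, after starting an OpenAI-compatible server (e.g. with `vllm serve neuralmagic/granite-3.1-8b-instruct-FP8-dynamic`), it can be queried with the standard OpenAI Python client. The snippet below is a minimal sketch; the host, port, and placeholder API key assume a default local deployment.

```python
# Minimal sketch: query a local vLLM OpenAI-compatible server.
# Assumes the server is listening on the default port 8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="neuralmagic/granite-3.1-8b-instruct-FP8-dynamic",
    messages=[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
    temperature=0.3,
    max_tokens=256,
)
print(response.choices[0].message.content)
```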
### Deploy on Red Hat AI Inference Server
```bash
$ podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 1 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/granite-3.1-8b-instruct-FP8-dynamic
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
### Deploy on Red Hat Enterprise Linux AI
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/granite-3-1-8b-instruct-fp8-dynamic:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/granite-3-1-8b-instruct-fp8-dynamic -- --trust-remote-code
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/granite-3-1-8b-instruct-fp8-dynamic
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
### Deploy on Red Hat OpenShift AI
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
  name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
  annotations:
    openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
    opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  annotations:
    prometheus.io/port: '8080'
    prometheus.io/path: '/metrics'
  multiModel: false
  supportedModelFormats:
    - autoSelect: true
      name: vLLM
  containers:
    - name: kserve-container
      image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
      command:
        - python
        - -m
        - vllm.entrypoints.openai.api_server
      args:
        - "--port=8080"
        - "--model=/mnt/models"
        - "--served-model-name={{.Name}}"
      env:
        - name: HF_HOME
          value: /tmp/hf_home
      ports:
        - containerPort: 8080
          protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  annotations:
    openshift.io/display-name: granite-3-1-8b-instruct-fp8-dynamic # OPTIONAL CHANGE
    serving.kserve.io/deploymentMode: RawDeployment
  name: granite-3-1-8b-instruct-fp8-dynamic # specify model name. This value will be used to invoke the model in the payload
  labels:
    opendatahub.io/dashboard: 'true'
spec:
  predictor:
    maxReplicas: 1
    minReplicas: 1
    model:
      args:
        - '--trust-remote-code'
      modelFormat:
        name: vLLM
      name: ''
      resources:
        limits:
          cpu: '2'            # this is model specific
          memory: 8Gi         # this is model specific
          nvidia.com/gpu: '1' # this is accelerator specific
        requests:             # same comment for this block
          cpu: '1'
          memory: 4Gi
          nvidia.com/gpu: '1'
      runtime: vllm-cuda-runtime # must match the ServingRuntime name above
      storageUri: oci://registry.redhat.io/rhelai1/modelcar-granite-3-1-8b-instruct-fp8-dynamic:1.5
    tolerations:
      - effect: NoSchedule
        key: nvidia.com/gpu
        operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>

# apply both resources to run the model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml

# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.

# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<cluster-ingress-domain>/v1/chat/completions \
        -H "Content-Type: application/json" \
        -d '{
    "model": "granite-3-1-8b-instruct-fp8-dynamic",
    "stream": true,
    "stream_options": {
        "include_usage": true
    },
    "max_tokens": 1,
    "messages": [
        {
            "role": "user",
            "content": "How can a bee fly when its wings are so small?"
        }
    ]
}'
```
See [Red Hat OpenShift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
The Python script shown below was saved as `quantize.py` and invoked as follows:
```bash
python quantize.py --model_id ibm-granite/granite-3.1-8b-instruct --save_path "output_dir/"
```
```python
import argparse
import os

from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot


def main():
    parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
    parser.add_argument('--model_id', type=str, required=True,
                        help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")')
    parser.add_argument('--save_path', type=str, default='.',
                        help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
    args = parser.parse_args()

    # Load model
    model = AutoModelForCausalLM.from_pretrained(
        args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
    )
    tokenizer = AutoTokenizer.from_pretrained(args.model_id)

    # Configure the quantization algorithm and scheme
    recipe = QuantizationModifier(
        targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
    )

    # Apply quantization
    oneshot(model=model, recipe=recipe)

    save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
    os.makedirs(save_path, exist_ok=True)

    # Save to disk in compressed-tensors format
    model.save_pretrained(save_path)
    tokenizer.save_pretrained(save_path)
    print(f"Model and tokenizer saved to: {save_path}")


if __name__ == "__main__":
    main()
```
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
#### OpenLLM Leaderboard V1
```bash
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-8b-instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### OpenLLM Leaderboard V2
```bash
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-8b-instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks leaderboard \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### HumanEval
##### Generation
```bash
python3 codegen/generate.py \
--model neuralmagic/granite-3.1-8b-instruct-FP8-dynamic \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
##### Sanitization
```bash
python3 evalplus/sanitize.py \
humaneval/neuralmagic--granite-3.1-8b-instruct-FP8-dynamic_vllm_temp_0.2
```
##### Evaluation
```bash
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic--granite-3.1-8b-instruct-FP8-dynamic_vllm_temp_0.2-sanitized
```
### Accuracy
| Category | Metric | ibm-granite/granite-3.1-8b-instruct | neuralmagic/granite-3.1-8b-instruct-FP8-dynamic | Recovery (%) |
| :------- | :----- | :---------------------------------- | :----------------------------------------------- | :----------- |
| **OpenLLM V1** | ARC-Challenge (Acc-Norm, 25-shot) | 66.81 | 66.81 | 100.00 |
| | GSM8K (Strict-Match, 5-shot) | 64.52 | 66.64 | 103.29 |
| | HellaSwag (Acc-Norm, 10-shot) | 84.18 | 84.16 | 99.98 |
| | MMLU (Acc, 5-shot) | 65.52 | 65.36 | 99.76 |
| | TruthfulQA (MC2, 0-shot) | 60.57 | 60.52 | 99.92 |
| | Winogrande (Acc, 5-shot) | 80.19 | 79.95 | 99.70 |
| | **Average Score** | **70.30** | **70.57** | **100.39** |
| **OpenLLM V2** | IFEval (Inst Level Strict Acc, 0-shot) | 74.10 | 73.62 | 99.35 |
| | BBH (Acc-Norm, 3-shot) | 53.19 | 53.26 | 100.13 |
| | Math-Hard (Exact-Match, 4-shot) | 14.77 | 16.79 | 113.66 |
| | GPQA (Acc-Norm, 0-shot) | 31.76 | 32.58 | 102.58 |
| | MUSR (Acc-Norm, 0-shot) | 46.01 | 47.34 | 102.89 |
| | MMLU-Pro (Acc, 5-shot) | 35.81 | 35.72 | 99.75 |
| | **Average Score** | **42.61** | **43.22** | **101.43** |
| **Coding** | HumanEval Pass@1 | 71.00 | 69.90 | 98.45 |
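Recovery is the quantized model's score expressed as a percentage of the baseline score. A quick check against the GSM8K row above:

```python
# Recovery (%) = quantized score / baseline score * 100
baseline, quantized = 64.52, 66.64  # GSM8K (Strict-Match, 5-shot)
print(f"{quantized / baseline * 100:.2f}")  # 103.29
```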
## Inference Performance
This model achieves up to 1.5x speedup in single-stream deployment and up to 1.1x speedup in multi-stream asynchronous deployment on L40 GPUs.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1, and [GuideLLM](https://github.com/neuralmagic/guidellm).
The following command was used for benchmarking; it assumes the model is already being served by vLLM at `http://localhost:8000`:
```bash
guidellm --model neuralmagic/granite-3.1-8b-instruct-FP8-dynamic --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<output_tokens>" --max-seconds 360 --backend aiohttp_server
```
### Single-stream performance (measured with vLLM version 0.6.6.post1)
All latencies are reported in seconds.

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| L40 | granite-3.1-8b-instruct | | 25.1 | 3.2 | 25.3 | 3.2 | 3.2 | 6.3 | 13.4 |
| L40 | granite-3.1-8b-instruct-FP8-dynamic<br>(this model) | 1.47 | 16.8 | 2.2 | 17.1 | 2.2 | 2.1 | 4.2 | 9.3 |
| L40 | granite-3.1-8b-instruct-quantized.w4a16 | 2.72 | 8.9 | 1.2 | 9.2 | 1.2 | 1.1 | 2.3 | 5.3 |
### Multi-stream asynchronous performance (measured with vLLM version 0.6.6.post1)
Throughput is reported as the maximum number of queries per second (QPS).

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| L40 | granite-3.1-8b-instruct | | 1.4 | 7.8 | 1.1 | 6.2 | 15.5 | 6.0 | 0.7 |
| L40 | granite-3.1-8b-instruct-FP8-dynamic<br>(this model) | 1.12 | 2.1 | 7.4 | 1.3 | 5.9 | 15.3 | 6.9 | 0.8 |
| L40 | granite-3.1-8b-instruct-quantized.w4a16 | 1.29 | 2.4 | 8.9 | 1.4 | 7.1 | 17.8 | 7.8 | 1.0 |