Qwen3-8B-NVFP4A16
Model Overview
- Model Architecture: Qwen3ForCausalLM
  - Input: Text
  - Output: Text
- Model Optimizations:
  - Weight quantization: FP4
  - Activation quantization: FP16
- Out-of-scope: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- Release Date: 6/25/2025
- Version: 1.0
- Model Developers: ELVISIO (Thanks to RedHatAI)
This model is a quantized version of Qwen/Qwen3-8B. It was evaluated on several tasks to assess its quality in comparison to the unquantized model.
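The specific tasks are not listed on this card. As a hedged sketch only, an accuracy comparison against the unquantized model can be reproduced with lm-evaluation-harness on the vLLM backend; the task chosen below (gsm8k) is an illustrative placeholder, not necessarily one of the tasks used.
import lm_eval

# Hedged sketch: run an lm-evaluation-harness task against the quantized checkpoint via vLLM.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args="pretrained=ELVISIO/Qwen3-8B-NVFP4A16,dtype=auto,gpu_memory_utilization=0.9",
    tasks=["gsm8k"],  # placeholder task for illustration
    batch_size="auto",
)
print(results["results"])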
Model Optimizations
This model was obtained by quantizing the weights of Qwen/Qwen3-8B to the FP4 data type, ready for inference with vLLM >= 0.9.1. This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements of the model weights by approximately 75%.
Only the weights of the linear operators within transformer blocks are quantized, using LLM Compressor.
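As a rough, back-of-the-envelope illustration of the weight-memory savings (figures are approximate and ignore the unquantized layers and the FP4 group scales):
# Rough weight-memory estimate (illustrative only; ignores group scales and unquantized layers).
num_params = 8.2e9  # approximate parameter count of Qwen3-8B
bf16_gib = num_params * 16 / 8 / 1024**3   # 16 bits per weight
nvfp4_gib = num_params * 4 / 8 / 1024**3   # 4 bits per weight
print(f"BF16 weights:  ~{bf16_gib:.1f} GiB")
print(f"NVFP4 weights: ~{nvfp4_gib:.1f} GiB  (~75% smaller)")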
Deployment
Use with vLLM
This model can be deployed efficiently using the vLLM backend, as shown in the example below.
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "ELVISIO/Qwen3-8B-NVFP4A16"
number_gpus = 1
sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
{"role": "user", "content": "Who are you?"},
]
prompts = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompts, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
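Alternatively, the model can be served as a standalone OpenAI-compatible endpoint using the official vLLM Docker image, for example: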
docker run -d \
--name vllm-gpu \
--runtime nvidia --gpus all \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--add-host="host.docker.internal:host-gateway" \
-p 8000:8000 \
--ipc=host \
--restart=always \
vllm/vllm-openai:v0.10.0 \
--model ELVISIO/Qwen3-8B-NVFP4A16 \
--served-model-name qwen3:8b \
--rope-scaling '{"rope_type":"yarn","factor":1.6,"original_max_position_embeddings":32768}' \
--max-model-len 52400 \
--host 0.0.0.0 \
--port 8000 \
--reasoning-parser qwen3 \
--gpu-memory-utilization 0.9
vLLM also supports OpenAI-compatible serving. See the documentation for more details.
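For instance, once the container above is running, the endpoint can be queried with any OpenAI-compatible client. A minimal sketch using the openai Python package (reusing the pirate prompt from the offline example; localhost:8000 matches the port mapping above):
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="qwen3:8b",  # must match --served-model-name
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you?"},
    ],
    temperature=0.6,
    max_tokens=256,
)
print(response.choices[0].message.content)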
Creation
This model was created by applying LLM Compressor with calibration samples from UltraChat, as presented in the code snippet below.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.utils import dispatch_for_generation
MODEL_ID = "Qwen/Qwen3-8B"
# Load model.
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
DATASET_SPLIT = "train_sft"
# Select number of samples. 512 samples is a good place to start.
# Increasing the number of samples can improve accuracy.
NUM_CALIBRATION_SAMPLES = 512
MAX_SEQUENCE_LENGTH = 2048
# Load dataset and preprocess.
ds = load_dataset(DATASET_ID, split=f"{DATASET_SPLIT}[:{NUM_CALIBRATION_SAMPLES}]")
ds = ds.shuffle(seed=42)
def preprocess(example):
    return {
        "text": tokenizer.apply_chat_template(
            example["messages"],
            tokenize=False,
        )
    }
ds = ds.map(preprocess)
# Tokenize inputs.
def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )
ds = ds.map(tokenize, remove_columns=ds.column_names)
# Configure the quantization algorithm and scheme.
# In this case, we:
# * quantize the weights to FP4 with a per-group-16 scale via PTQ (the NVFP4A16 scheme keeps activations in 16-bit)
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4A16", ignore=["lm_head"])
# Save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.rstrip("/").split("/")[-1] + "-NVFP4A16"
# Apply quantization.
oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    output_dir=SAVE_DIR,
)
print("\n\n")
print("========== SAMPLE GENERATION ==============")
dispatch_for_generation(model)
input_ids = tokenizer("Hello my name is", return_tensors="pt").input_ids.to("cuda")
output = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(output[0]))
print("==========================================\n\n")
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
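As an optional sanity check after the script above has run, the quantization metadata written into the saved checkpoint can be inspected directly (a minimal sketch; the quantization_config block follows the compressed-tensors format emitted by LLM Compressor):
import json
import os

# Print the quantization_config block that LLM Compressor writes into config.json.
with open(os.path.join(SAVE_DIR, "config.json")) as f:
    config = json.load(f)
print(json.dumps(config.get("quantization_config", {}), indent=2))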