FP8-Dynamic quantization using llmcompressor. Run with:

```shell
vllm serve qingy2024/gemma-3-27b-it-FP8-Dynamic --max-model-len 4096
```
