---
tags:
- w4a16
- int4
- vllm
license: apache-2.0
license_link: https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/apache-2.0.md
language:
- en
base_model: ibm-granite/granite-3.1-8b-base
library_name: transformers
---
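The tags above indicate this W4A16 (INT4-weight) checkpoint is intended for serving with vLLM. Below is a minimal loading sketch, assuming a recent vLLM release with compressed-tensors support; the prompt and sampling settings are illustrative only.

```python
from vllm import LLM, SamplingParams

# Usage sketch (not from the card itself): load this quantized checkpoint with vLLM.
# Model ID taken from this card; sampling settings are arbitrary examples.
llm = LLM(model="neuralmagic/granite-3.1-8b-base-quantized.w4a16")
sampling = SamplingParams(temperature=0.0, max_tokens=128)

# This is a base (non-instruct) model, so plain text completion is used here.
outputs = llm.generate(["def fibonacci(n):"], sampling)
print(outputs[0].outputs[0].text)
```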
| Category | Metric | ibm-granite/granite-3.1-8b-base | neuralmagic/granite-3.1-8b-base-quantized.w4a16 | Recovery (%) |
| --- | --- | --- | --- | --- |
| OpenLLM V1 | ARC-Challenge (Acc-Norm, 25-shot) | 64.68 | 62.37 | 96.43 |
| | GSM8K (Strict-Match, 5-shot) | 60.88 | 54.89 | 90.16 |
| | HellaSwag (Acc-Norm, 10-shot) | 83.52 | 82.53 | 98.81 |
| | MMLU (Acc, 5-shot) | 63.33 | 62.78 | 99.13 |
| | TruthfulQA (MC2, 0-shot) | 51.33 | 51.30 | 99.94 |
| | Winogrande (Acc, 5-shot) | 80.90 | 79.24 | 97.95 |
| | Average Score | 67.44 | 65.52 | 97.15 |
| Coding | HumanEval Pass@1 | 44.10 | 40.70 | 92.28 |
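The Recovery column is consistent with the quantized score divided by the baseline score, expressed as a percentage. A quick check under that assumption (plain Python, values copied from the table) reproduces two of the reported rows:

```python
# Assumed definition: recovery = 100 * quantized_score / baseline_score.
def recovery(quantized: float, baseline: float) -> float:
    return round(100 * quantized / baseline, 2)

print(recovery(54.89, 60.88))  # 90.16 -> matches the GSM8K row
print(recovery(65.52, 67.44))  # 97.15 -> matches the OpenLLM V1 average row
```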
**Latency (s)**

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A5000 | granite-3.1-8b-base | | 28.3 | 3.7 | 28.8 | 3.8 | 3.6 | 7.2 | 15.7 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.60 | 17.7 | 2.3 | 18.0 | 2.4 | 2.2 | 4.5 | 10.0 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 2.61 | 10.3 | 1.5 | 10.7 | 1.5 | 1.3 | 2.7 | 6.6 |
| A6000 | granite-3.1-8b-base | | 25.8 | 3.4 | 26.2 | 3.4 | 3.3 | 6.5 | 14.2 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.50 | 17.4 | 2.3 | 16.9 | 2.2 | 2.2 | 4.4 | 9.8 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 2.48 | 10.0 | 1.4 | 10.4 | 1.5 | 1.3 | 2.5 | 6.2 |
| A100 | granite-3.1-8b-base | | 13.6 | 1.8 | 13.7 | 1.8 | 1.7 | 3.4 | 7.3 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.31 | 10.4 | 1.3 | 10.5 | 1.4 | 1.3 | 2.6 | 5.6 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 1.80 | 7.3 | 1.0 | 7.4 | 1.0 | 0.9 | 1.9 | 4.3 |
| L40 | granite-3.1-8b-base | | 25.1 | 3.2 | 25.3 | 3.2 | 3.2 | 6.3 | 13.4 |
| | granite-3.1-8b-base-FP8-dynamic | 1.47 | 16.8 | 2.2 | 17.1 | 2.2 | 2.1 | 4.2 | 9.3 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 2.72 | 8.9 | 1.2 | 9.2 | 1.2 | 1.1 | 2.3 | 5.3 |
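The Speedup column is not defined in the table itself; the reported values are consistent with the per-use-case latency ratio (baseline over quantized) averaged across the seven workloads. A minimal cross-check under that assumption, using the A5000 values copied from above:

```python
# Assumed definition: speedup = mean over use cases of (baseline latency / quantized latency).
# Latencies (seconds) copied from the A5000 rows of the table above.
baseline = [28.3, 3.7, 28.8, 3.8, 3.6, 7.2, 15.7]  # granite-3.1-8b-base
w4a16    = [10.3, 1.5, 10.7, 1.5, 1.3, 2.7, 6.6]   # this model

speedup = sum(b / q for b, q in zip(baseline, w4a16)) / len(baseline)
print(round(speedup, 2))  # 2.61 -> matches the reported A5000 value
```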
**Maximum Throughput (Queries per Second)**

| GPU class | Model | Speedup | Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens | Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens | Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens | RAG<br>prefill: 1024 tokens<br>decode: 128 tokens | Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens | Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens | Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A5000 | granite-3.1-8b-base | | 0.8 | 3.1 | 0.4 | 2.5 | 6.7 | 2.7 | 0.3 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.71 | 1.3 | 5.2 | 0.9 | 4.0 | 10.5 | 4.4 | 0.5 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 1.46 | 1.3 | 3.9 | 0.8 | 2.9 | 8.2 | 3.6 | 0.5 |
| A6000 | granite-3.1-8b-base | | 1.3 | 5.1 | 0.9 | 4.0 | 0.3 | 4.3 | 0.6 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.39 | 1.8 | 7.0 | 1.3 | 5.6 | 14.0 | 6.3 | 0.8 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 1.09 | 1.9 | 4.8 | 1.0 | 3.8 | 10.0 | 5.0 | 0.6 |
| A100 | granite-3.1-8b-base | | 3.1 | 10.7 | 2.1 | 8.5 | 20.6 | 9.6 | 1.4 |
| | granite-3.1-8b-base-quantized.w8a8 | 1.23 | 3.8 | 14.2 | 2.1 | 11.4 | 25.9 | 12.1 | 1.7 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 0.96 | 3.4 | 9.0 | 2.6 | 7.2 | 18.0 | 8.8 | 1.3 |
| L40 | granite-3.1-8b-base | | 1.4 | 7.8 | 1.1 | 6.2 | 15.5 | 6.0 | 0.7 |
| | granite-3.1-8b-base-FP8-dynamic | 1.12 | 2.1 | 7.4 | 1.3 | 5.9 | 15.3 | 6.9 | 0.8 |
| | granite-3.1-8b-base-quantized.w4a16 (this model) | 1.29 | 2.4 | 8.9 | 1.4 | 7.1 | 17.8 | 7.8 | 1.0 |
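The throughput Speedup column appears to follow the same convention with the ratio inverted (quantized QPS over baseline QPS, averaged over the seven use cases); for example, averaging the A100 ratios of this model against the baseline row gives roughly 0.96, matching the reported value.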