More details and evals coming soon...
Sanity check: GSM8K eval
meta-llama/Llama-4-Scout-17B-16E-Instruct (unquantized baseline)

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|---------|------------------|--------|---------------|--------|----------|
| gsm8k | 3 | flexible-extract | 5 | exact_match ↑ | 0.9189 | ± 0.0075 |
| | | strict-match | 5 | exact_match ↑ | 0.9014 | ± 0.0082 |
RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic (FP8 quantized, this model)

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|---------|------------------|--------|---------------|--------|----------|
| gsm8k | 3 | flexible-extract | 5 | exact_match ↑ | 0.9219 | ± 0.0074 |
| | | strict-match | 5 | exact_match ↑ | 0.9075 | ± 0.0080 |
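The tables above suggest the FP8 model matches the baseline within measurement noise. A minimal sketch of that check, using the flexible-extract scores and standard errors copied from the tables (combining standard errors under an independence assumption):

```python
import math

# (exact_match, stderr) from the tables above, flexible-extract filter
baseline = (0.9189, 0.0075)   # unquantized
quantized = (0.9219, 0.0074)  # FP8-dynamic

diff = quantized[0] - baseline[0]
# Combined standard error of the difference, assuming independent runs
se = math.hypot(baseline[1], quantized[1])

print(f"diff = {diff:+.4f}, ~95% CI half-width = {1.96 * se:.4f}")
# The difference is well inside the interval, so the FP8 score is
# statistically indistinguishable from the baseline on this task.
```

The same comparison on the strict-match rows (+0.0061 difference vs. a similar half-width) leads to the same conclusion.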
Model tree for RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic

- Base model: meta-llama/Llama-4-Scout-17B-16E