# Llama-4-Scout-17B-16E-Instruct-GGUF
## Original Model

[unsloth/Llama-4-Scout-17B-16E-Instruct](https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct)
## Run with GaiaNet

**Prompt template:**

prompt template: `llama-4-chat`

**Context size:**

chat_ctx_size: `10M` (10,000,000 tokens)
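
If you already have a GaiaNet node installed, the two values above can be applied with the `gaianet config` subcommand described in the customize guide linked below. This is a minimal sketch rather than the authoritative procedure: the `--chat-url`, `--chat-ctx-size`, and `--prompt-template` flags follow the customize guide, and the Q4_K_M file name is an assumption; substitute whichever quantization you actually download.

```bash
# Point an existing GaiaNet node at this GGUF repo and apply the card's settings.
# Assumptions: flag names per the customize guide; Q4_K_M chosen as the example quant.
gaianet config \
  --chat-url https://huggingface.co/gaianet/Llama-4-Scout-17B-16E-Instruct-GGUF/resolve/main/Llama-4-Scout-17B-16E-Instruct-Q4_K_M.gguf \
  --chat-ctx-size 10000000 \
  --prompt-template llama-4-chat

# Re-initialize and restart so the node downloads and serves the new model.
gaianet init
gaianet start
```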
**Run with GaiaNet:**

- Quick start: https://docs.gaianet.ai/node-guide/quick-start (a condensed sketch of this flow appears below)
- Customize your node: https://docs.gaianet.ai/node-guide/customize
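
On a fresh machine, the quick-start flow condenses to roughly the following. The install URL and subcommands mirror the quick-start guide; treat this as a sketch and defer to the guide if anything has changed. A running node serves an OpenAI-compatible chat API (port 8080 is assumed here as the default).

```bash
# Install the GaiaNet node software (see the quick-start guide above).
curl -sSfL 'https://github.com/GaiaNet-AI/gaianet-node/releases/latest/download/install.sh' | bash

# Initialize with the default configuration, point it at this model
# as shown in the previous snippet, then start the node.
gaianet init
gaianet start

# Query the local node's OpenAI-compatible endpoint (default port assumed).
# Some deployments also expect a "model" field; add one if the request is rejected.
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"Hello, Llama 4 Scout!"}]}'
```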
Quantized with llama.cpp b5074
Available GGUF quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
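
The GGUF files can also be run directly with llama.cpp, the same project used to produce them. A minimal sketch, assuming llama.cpp's `llama-cli` and `huggingface-cli` are installed; the Q4_K_M pattern is only an example, and a context size far below the 10M maximum is used to keep the KV cache small for a smoke test.

```bash
# Fetch one quantization from this repo (large quants may be sharded,
# so match on a pattern rather than a single file name).
huggingface-cli download gaianet/Llama-4-Scout-17B-16E-Instruct-GGUF \
  --include "*Q4_K_M*" --local-dir .

# One-shot generation with llama.cpp; point -m at the (first) downloaded .gguf file.
# -c sets the context window, -n the number of tokens to generate.
llama-cli -m Llama-4-Scout-17B-16E-Instruct-Q4_K_M.gguf \
  -c 8192 -n 256 -p "Explain what a GGUF file is in one paragraph."
```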
## Model Tree

- Base model: meta-llama/Llama-4-Scout-17B-16E
- Finetuned: unsloth/Llama-4-Scout-17B-16E-Instruct
- Quantized: gaianet/Llama-4-Scout-17B-16E-Instruct-GGUF (this repo)