FLUX.1 Kontext Dev in BF16 GGUF format, created with llama.cpp version b5873 and the latest patch.
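
A minimal loading sketch, assuming the Hugging Face diffusers GGUF loader (`GGUFQuantizationConfig`, `FluxTransformer2DModel.from_single_file`) and `FluxKontextPipeline` (diffusers >= 0.34); the GGUF filename, input image, and prompt below are placeholders, not files shipped with this repository:

```python
# Sketch: load a BF16 GGUF Flux Kontext transformer with diffusers.
# The .gguf filename is a placeholder; substitute a file from this repo.
import torch
from diffusers import FluxKontextPipeline, FluxTransformer2DModel, GGUFQuantizationConfig
from diffusers.utils import load_image

transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/ND911/Flux1-Kontext-Dev-BF16_ggufs/blob/main/<file>.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")

# Kontext edits an input image according to a text instruction.
image = load_image("input.png")  # placeholder input
result = pipe(image=image, prompt="Make the sky a sunset", guidance_scale=2.5).images[0]
result.save("output.png")
```

The same GGUF file should also work in ComfyUI via a GGUF loader node; the diffusers route above is just one option.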

Format: GGUF
Model size: 11.9B params
Architecture: flux
Precision: BF16 (16-bit)
