> **Note:** The T5 and CLIP text encoders are still not provided in the original model.

# FLUX.1-Canny-dev-GGUF

## Original Model

[black-forest-labs/FLUX.1-Canny-dev](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev)

## Run with LlamaEdge-StableDiffusion

- Version: coming soon

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| ae.safetensors | f32 | 32 | 335 MB | |
| flux1-canny-dev-Q2_K.gguf | Q2_K | 2 | 4.15 GB | |
| flux1-canny-dev-Q3_K.gguf | Q3_K | 3 | 5.35 GB | |
| flux1-canny-dev-Q4_0.gguf | Q4_0 | 4 | 6.93 GB | |
| flux1-canny-dev-Q4_1.gguf | Q4_1 | 4 | 7.67 GB | |
| flux1-canny-dev-Q4_K.gguf | Q4_K | 4 | 6.93 GB | |
| flux1-canny-dev-Q5_0.gguf | Q5_0 | 5 | 8.40 GB | |
| flux1-canny-dev-Q5_1.gguf | Q5_1 | 5 | 9.14 GB | |
| flux1-canny-dev-Q8_0.gguf | Q8_0 | 8 | 12.6 GB | |
| flux1-canny-dev.safetensors | f16 | 16 | 23.8 GB | |

Quantized with stable-diffusion.cpp master-c3eeb669.
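Until the LlamaEdge-StableDiffusion release is available, the individual files can be fetched with the `huggingface_hub` Python client. The snippet below is a minimal sketch, not an official workflow: the chosen quant (`flux1-canny-dev-Q4_0.gguf`) is just one entry from the table above, and which files you need depends on your runtime.

```python
# Minimal sketch: download one quantized diffusion model plus the VAE from this repo.
# Filenames are taken from the table above; swap in the quant that fits your hardware.
from huggingface_hub import hf_hub_download

repo_id = "second-state/FLUX.1-Canny-dev-GGUF"

# Quantized diffusion model weights (Q4_0 chosen as an example)
model_path = hf_hub_download(repo_id=repo_id, filename="flux1-canny-dev-Q4_0.gguf")

# VAE weights listed in the same repo
vae_path = hf_hub_download(repo_id=repo_id, filename="ae.safetensors")

# Note: the T5 and CLIP text encoders are not included in this repo (see the note above)
# and must be obtained separately.
print(model_path)
print(vae_path)
```

The downloaded paths can then be passed to whichever GGUF-capable runtime you use, such as stable-diffusion.cpp (the tool used for quantization) or, once released, LlamaEdge-StableDiffusion.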
