---
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE.md
model_creator: black-forest-labs
model_name: FLUX.1-dev
quantized_by: Second State Inc.
language:
- en
tags:
- text-to-image
- image-generation
- flux
---
# FLUX.1-dev-GGUF
## Original Model
[black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
## Run with `sd-api-server`
- sd-api-server version: [0.1.4](https://github.com/LlamaEdge/sd-api-server/releases/tag/0.1.4)
- Run as LlamaEdge service (an example API request is shown below)
```bash
wasmedge --dir .:. sd-api-server.wasm \
  --model-name flux1-dev \
  --diffusion-model flux1-dev-Q4_0.gguf \
  --vae ae.safetensors \
  --clip-l clip_l.safetensors \
  --t5xxl t5xxl-Q8_0.gguf
```
- Run with LoRA
Assuming the LoRA model is located in the `lora-models` directory (a request example that applies the LoRA is shown below):
```bash
wasmedge --dir .:. \
  --dir lora-models:lora-models \
  sd-api-server.wasm \
  --model-name flux1-dev \
  --diffusion-model flux1-dev-Q4_0.gguf \
  --vae ae.safetensors \
  --clip-l clip_l.safetensors \
  --t5xxl t5xxl-Q8_0.gguf \
  --lora-model-dir lora-models
```
*For details, see https://github.com/LlamaEdge/sd-api-server/blob/main/examples/flux_with_lora.md*
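
Once the server is running, images can be requested through its OpenAI-style image generation endpoint. The snippet below is a minimal sketch, assuming the server listens on the default address `0.0.0.0:8080`; the prompt is only an example.

```bash
# Minimal sketch of an image-generation request, assuming the server
# listens on the default address 0.0.0.0:8080.
curl -X POST 'http://localhost:8080/v1/images/generations' \
  --header 'Content-Type: application/json' \
  --data '{
      "model": "flux1-dev",
      "prompt": "A lighthouse on a rocky coast at sunset"
  }'
```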
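
For the LoRA setup, stable-diffusion.cpp applies LoRAs through the `<lora:name:weight>` prompt syntax, where `name` is the LoRA file name (without extension) inside the directory passed via `--lora-model-dir`. The sketch below uses `my-flux-lora` as a placeholder name; see the linked example for the exact workflow.

```bash
# Sketch of a request applying a LoRA via the <lora:name:weight> prompt syntax.
# "my-flux-lora" is a placeholder for a file named my-flux-lora.safetensors
# inside the lora-models directory.
curl -X POST 'http://localhost:8080/v1/images/generations' \
  --header 'Content-Type: application/json' \
  --data '{
      "model": "flux1-dev",
      "prompt": "A lighthouse on a rocky coast at sunset <lora:my-flux-lora:1.0>"
  }'
```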
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [ae.safetensors](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/ae.safetensors) | f32 | 32 | 335 MB | |
| [clip_l-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/clip_l-Q8_0.gguf) | Q8_0 | 8 | 131 MB | |
| [clip_l.safetensors](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/clip_l.safetensors) | f16 | 16 | 246 MB | |
| [flux1-dev-Q2_K.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q2_K.gguf) | Q2_K | 2 | 4.15 GB | |
| [flux1-dev-Q3_K.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q3_K.gguf) | Q3_K | 3 | 5.35 GB | |
| [flux1-dev-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q4_0.gguf) | Q4_0 | 4 | 6.93 GB | |
| [flux1-dev-Q4_1.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q4_1.gguf) | Q4_1 | 4 | 7.67 GB | |
| [flux1-dev-Q4_K.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q4_K.gguf) | Q4_K | 4 | 6.93 GB | |
| [flux1-dev-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q5_0.gguf) | Q5_0 | 5 | 8.40 GB | |
| [flux1-dev-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q5_1.gguf) | Q5_1 | 5 | 9.14 GB | |
| [flux1-dev-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev-Q8_0.gguf) | Q8_0 | 8 | 12.6 GB | |
| [flux1-dev.safetensors](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/flux1-dev.safetensors) | f16 | 16 | 23.8 GB | |
| [t5xxl-Q2_K.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q2_K.gguf) | Q2_K | 2 | 1.61 GB | |
| [t5xxl-Q3_K.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q3_K.gguf) | Q3_K | 3 | 2.10 GB | |
| [t5xxl-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q4_0.gguf) | Q4_0 | 4 | 2.75 GB | |
| [t5xxl-Q4_1.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q4_1.gguf) | Q4_1 | 4 | 3.06 GB | |
| [t5xxl-Q4_K.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q4_K.gguf) | Q4_K | 4 | 2.75 GB | |
| [t5xxl-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q5_0.gguf) | Q5_0 | 5 | 3.36 GB | |
| [t5xxl-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q5_1.gguf) | Q5_1 | 5 | 3.67 GB | |
| [t5xxl-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl-Q8_0.gguf) | Q8_0 | 8 | 5.20 GB | |
| [t5xxl_fp16.safetensors](https://huggingface.co/second-state/FLUX.1-dev-GGUF/blob/main/t5xxl_fp16.safetensors) | f16 | 16 | 9.79 GB | |
**Quantized with stable-diffusion.cpp `master-e71ddce`.**
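
The server needs one file from each group (diffusion model, VAE, CLIP-L, and T5-XXL). The sketch below downloads one usable combination with `huggingface-cli`; the quant levels shown are just an example and can be swapped for any row in the table.

```bash
# Example download of one usable combination of files; substitute any
# quant level from the table above as needed.
huggingface-cli download second-state/FLUX.1-dev-GGUF \
  flux1-dev-Q4_0.gguf ae.safetensors clip_l.safetensors t5xxl-Q8_0.gguf \
  --local-dir .
```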