This GGUF file is a direct conversion of [DiffSynth-Studio/Qwen-Image-Distill-Full](https://huggingface.co/DiffSynth-Studio/Qwen-Image-Distill-Full).
| Type | Name | Location | Download |
| --- | --- | --- | --- |
| Main Model | Qwen-Image | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | Qwen2.5-VL-7B | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | Qwen-Image VAE | `ComfyUI/models/vae` | Safetensors |
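A minimal download sketch using `huggingface_hub`, assuming the filename below, which is a hypothetical placeholder; check the repository's file list for the exact GGUF quant you want, and fetch the text encoder and VAE from their respective repos into the folders listed above:

```python
# Sketch: fetch one GGUF quant from this repo into the ComfyUI model tree.
# The filename is a hypothetical placeholder; see the repo file list.
from huggingface_hub import hf_hub_download

comfy_root = "ComfyUI"  # adjust to your ComfyUI install path

hf_hub_download(
    repo_id="QuantStack/Qwen-Image-Distill-GGUF",
    filename="Qwen-Image-Distill-Q4_K_M.gguf",  # hypothetical filename
    local_dir=f"{comfy_root}/models/unet",
)
```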
Since this is a quantized model, all original licensing terms and usage restrictions remain in effect.
Usage
The model can be used with the ComfyUI custom node [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) by city96. Place the model files in `ComfyUI/models/unet`; see the GitHub readme for further installation instructions.
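If you want to sanity-check a downloaded file before loading it in ComfyUI, here is a minimal sketch using the `gguf` Python package (`pip install gguf`); it is not required by ComfyUI-GGUF, and the file path below is an assumption:

```python
# Sketch: inspect a downloaded GGUF file's tensors and quantization types.
from gguf import GGUFReader

path = "ComfyUI/models/unet/Qwen-Image-Distill-Q4_K_M.gguf"  # hypothetical filename
reader = GGUFReader(path)

# Overall tensor count and the quantization types present in the file.
quant_types = {t.tensor_type.name for t in reader.tensors}
print(f"{len(reader.tensors)} tensors, quant types: {sorted(quant_types)}")

# A few tensor names/shapes to confirm this is the diffusion (unet) model.
for t in reader.tensors[:5]:
    print(t.name, list(t.shape), t.tensor_type.name)
```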
Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
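As a rough rule of thumb, weight memory is approximately parameter count × bits per weight / 8. The sketch below assumes a ~20B parameter count for the Qwen-Image diffusion model (an assumption, not stated in this card) and ignores activations, the text encoder, and the VAE:

```python
# Back-of-the-envelope weight-memory estimate per quantization level.
PARAMS = 20e9  # assumed parameter count; activations/text encoder/VAE excluded

for bits in (2, 3, 4, 5, 6, 8):
    gib = PARAMS * bits / 8 / 1024**3
    print(f"{bits}-bit: ~{gib:.1f} GiB of weights")
```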
Base model: Qwen/Qwen-Image