GGUF quantized version of Mochi (incl. GGUF encoder and GGUF VAE) 🐷🐷🐷
- drag mochi to ./ComfyUI/models/diffusion_models
- drag t5xxl to ./ComfyUI/models/text_encoders
- drag vae to ./ComfyUI/models/vae
- drag the demo video (below) into your browser to load the workflow
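The folder layout above can be sketched as follows; the `.gguf` filenames in the comments are placeholders, not the actual release names:

```shell
# create the ComfyUI model folders (a sketch; the gguf filenames below are assumptions)
mkdir -p ComfyUI/models/diffusion_models ComfyUI/models/text_encoders ComfyUI/models/vae
# then place each file, e.g.:
# mv mochi-q8_0.gguf  ComfyUI/models/diffusion_models/
# mv t5xxl-q8_0.gguf  ComfyUI/models/text_encoders/
# mv mochi-vae.gguf   ComfyUI/models/vae/
```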

- Prompt
- a pinky pig moving quickly in a beautiful winter scenery nature trees sunset tracking camera
review
- new tensor-fixed version; loads faster with the full GGUF set (model + encoder + decoder)
- upgraded encoder from fp16/fp8 to fp32; file size and memory consumption are unaffected; more compatible with older machines
- new fp32 GGUF VAE decoder; similar size to fp16 safetensors; better quality; lower RAM requirement
- q2 works but is not usable; you could get q8 [here]
reference
- pig architecture
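All of the quant files above are standard GGUF containers, so any GGUF-aware tool can inspect them. A minimal sketch of reading the fixed header (magic, version, tensor count, metadata KV count) per the GGUF spec; the `demo.gguf` file here is synthetic, a real file would be one of the quants from this repo:

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header: magic, version, tensor count, metadata KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)                              # b"GGUF"
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        version, = struct.unpack("<I", f.read(4))      # uint32, little-endian
        tensor_count, = struct.unpack("<Q", f.read(8)) # uint64
        kv_count, = struct.unpack("<Q", f.read(8))     # uint64
    return version, tensor_count, kv_count

# demo on a synthetic header (a real target would be e.g. the q8 model file)
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<IQQ", 3, 10, 2))
print(read_gguf_header("demo.gguf"))  # (3, 10, 2)
```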
Model tree for calcuis/mochi-gguf
- base model: genmo/mochi-1-preview
- finetuned: Comfy-Org/mochi_preview_repackaged