gpustack committed (verified) · Commit 793970f · Parent: ee74ad7

docs: readme

Files changed (1): README.md (+9 -0)
README.md CHANGED
@@ -20,6 +20,15 @@ tags:
  **Original model**: [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)<br/>
  **GGUF quantization**: based on stable-diffusion.cpp [ac54e](https://github.com/leejet/stable-diffusion.cpp/commit/ac54e0076052a196b7df961eb1f792c9ff4d7f22), as patched by llama-box.

+ | Quantization | OpenAI CLIP ViT-L/14 Quantization | Google T5-xxl Quantization | VAE Quantization |
+ | --- | --- | --- | --- |
+ | FP16 | FP16 | FP16 | FP16 |
+ | Q8_0 | FP16 | Q8_0 | FP16 |
+ | (pure) Q8_0 | Q8_0 | Q8_0 | FP16 |
+ | Q4_1 | FP16 | Q8_0 | FP16 |
+ | Q4_0 | FP16 | Q8_0 | FP16 |
+ | (pure) Q4_0 | Q4_0 | Q4_0 | FP16 |
+
  ---

  ![FLUX.1 [dev] Grid](./dev_grid.jpg)
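The table added by this commit maps each diffusion-model quantization variant to the quantizations used for its bundled text encoders and VAE. For anyone scripting around these variants, the same pairing can be restated as a small lookup; the sketch below simply mirrors the table (`COMPONENT_QUANTS` and `component_quants` are illustrative names, not files or APIs in this repository):

```python
# Component quantizations bundled with each FLUX.1-dev GGUF variant,
# as listed in the table added by this commit.
# Keys: diffusion-model quantization; values: (CLIP ViT-L/14, T5-xxl, VAE).
COMPONENT_QUANTS = {
    "FP16":        ("FP16", "FP16", "FP16"),
    "Q8_0":        ("FP16", "Q8_0", "FP16"),
    "(pure) Q8_0": ("Q8_0", "Q8_0", "FP16"),
    "Q4_1":        ("FP16", "Q8_0", "FP16"),
    "Q4_0":        ("FP16", "Q8_0", "FP16"),
    "(pure) Q4_0": ("Q4_0", "Q4_0", "FP16"),
}

def component_quants(variant: str) -> tuple[str, str, str]:
    """Return the (CLIP ViT-L/14, T5-xxl, VAE) quantizations for a variant."""
    return COMPONENT_QUANTS[variant]

if __name__ == "__main__":
    clip_l, t5xxl, vae = component_quants("Q4_0")
    print(f"Q4_0 ships with CLIP={clip_l}, T5-xxl={t5xxl}, VAE={vae}")
```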