apepkuss79 committed d56a8b0 (verified, parent: c4b6599): Update README.md

1
+ ---
2
+ base_model: black-forest-labs/FLUX.1-Canny-dev
3
+ license: other
4
+ license_name: flux-1-dev-non-commercial-license
5
+ license_link: LICENSE.md
6
+ model_creator: black-forest-labs
7
+ model_name: FLUX.1-Canny-dev
8
+ quantized_by: Second State Inc.
9
+ language:
10
+ - en
11
+ tags:
12
+ - text-to-image
13
+ - image-generation
14
+ - flux
15
+ ---
16
+
17
+ <!-- header start -->
18
+ <!-- 200823 -->
19
+ <div style="width: auto; margin-left: auto; margin-right: auto">
20
+ <img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
21
+ </div>
22
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
23
+ <!-- header end -->

> [!CAUTION]
> The T5 and CLIP text encoders are still not provided with the original model.

# FLUX.1-Canny-dev-GGUF

## Original Model

[black-forest-labs/FLUX.1-Canny-dev](https://huggingface.co/black-forest-labs/FLUX.1-Canny-dev)

## Run with LlamaEdge-StableDiffusion

- Version: coming soon

<!-- - Version: [0.1.4](https://github.com/LlamaEdge/sd-api-server/releases/tag/0.1.4)

- Run as LlamaEdge service

  ```bash
  wasmedge --dir .:. sd-api-server.wasm \
    --model-name flux1-canny-dev \
    --diffusion-model flux1-canny-dev-Q4_0.gguf \
    --vae ae.safetensors \
    --clip-l clip_l.safetensors \
    --t5xxl t5xxl-Q8_0.gguf
  ```

- Run with LoRA

  Assume that the LoRA model is located in the `lora-models` directory.

  ```bash
  wasmedge --dir .:. \
    --dir lora-models:lora-models \
    sd-api-server.wasm \
    --model-name flux1-canny-dev \
    --diffusion-model flux1-canny-dev-Q4_0.gguf \
    --vae ae.safetensors \
    --clip-l clip_l.safetensors \
    --t5xxl t5xxl-Q8_0.gguf \
    --lora-model-dir lora-models
  ```

*For details, see https://github.com/LlamaEdge/sd-api-server/blob/main/examples/flux_with_lora.md* -->
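Until a compatible release is available, the request shape can only be sketched. Earlier sd-api-server releases exposed an OpenAI-style image-generation endpoint; the sketch below assumes the same interface, so the port `8080`, the `/v1/images/generations` path, and the JSON field names are all assumptions, not confirmed for the upcoming version:

```bash
# Request body for the image-generation endpoint; the field names
# ("model", "prompt") are assumptions based on earlier sd-api-server releases.
BODY='{"model": "flux1-canny-dev", "prompt": "a photo of a corgi wearing sunglasses"}'

# Uncomment once a server is actually running on localhost:8080:
# curl -s -X POST http://localhost:8080/v1/images/generations \
#   -H 'Content-Type: application/json' \
#   -d "$BODY"
printf '%s\n' "$BODY"
```

The response format is likewise unconfirmed; check the release notes once a version ships.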

## Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [ae.safetensors](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/ae.safetensors) | f32 | 32 | 335 MB | |
<!-- | [clip_l-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/clip_l-Q8_0.gguf) | Q8_0 | 8 | 131 MB | |
| [clip_l.safetensors](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/clip_l.safetensors) | f16 | 16 | 246 MB | | -->
| [flux1-canny-dev-Q2_K.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q2_K.gguf) | Q2_K | 2 | 4.15 GB | |
| [flux1-canny-dev-Q3_K.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q3_K.gguf) | Q3_K | 3 | 5.35 GB | |
| [flux1-canny-dev-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q4_0.gguf) | Q4_0 | 4 | 6.93 GB | |
| [flux1-canny-dev-Q4_1.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q4_1.gguf) | Q4_1 | 4 | 7.67 GB | |
| [flux1-canny-dev-Q4_K.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q4_K.gguf) | Q4_K | 4 | 6.93 GB | |
| [flux1-canny-dev-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q5_0.gguf) | Q5_0 | 5 | 8.40 GB | |
| [flux1-canny-dev-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q5_1.gguf) | Q5_1 | 5 | 9.14 GB | |
| [flux1-canny-dev-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev-Q8_0.gguf) | Q8_0 | 8 | 12.6 GB | |
| [flux1-canny-dev.safetensors](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/flux1-canny-dev.safetensors) | f16 | 16 | 23.8 GB | |
<!-- | [t5xxl-Q2_K.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q2_K.gguf) | Q2_K | 2 | 1.61 GB | |
| [t5xxl-Q3_K.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q3_K.gguf) | Q3_K | 3 | 2.10 GB | |
| [t5xxl-Q4_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q4_0.gguf) | Q4_0 | 4 | 2.75 GB | |
| [t5xxl-Q4_1.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q4_1.gguf) | Q4_1 | 4 | 3.06 GB | |
| [t5xxl-Q4_K.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q4_K.gguf) | Q4_K | 4 | 2.75 GB | |
| [t5xxl-Q5_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q5_0.gguf) | Q5_0 | 5 | 3.36 GB | |
| [t5xxl-Q5_1.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q5_1.gguf) | Q5_1 | 5 | 3.67 GB | |
| [t5xxl-Q8_0.gguf](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl-Q8_0.gguf) | Q8_0 | 8 | 5.20 GB | |
| [t5xxl_fp16.safetensors](https://huggingface.co/second-state/FLUX.1-Canny-dev-GGUF/blob/main/t5xxl_fp16.safetensors) | f16 | 16 | 9.79 GB | | -->
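Individual files from the table can be fetched with `huggingface-cli` from the `huggingface_hub` package; the repo id is this card's repository, while the choice of `flux1-canny-dev-Q4_0.gguf` plus `ae.safetensors` simply mirrors the combination used in the run commands above (network access required):

```bash
pip install -U "huggingface_hub[cli]"

# Download the Q4_0 diffusion model and the VAE into the current directory.
huggingface-cli download second-state/FLUX.1-Canny-dev-GGUF \
  flux1-canny-dev-Q4_0.gguf ae.safetensors \
  --local-dir .
```

Pick a different quantization from the table if you need a smaller or more accurate file.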

**Quantized with stable-diffusion.cpp `master-c3eeb669`.**