lym00 committed
Commit ed59e6d · verified · 1 Parent(s): 9c0262f

Update README.md

Files changed (1): README.md +3 −4

README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 base_model:
-- QuantStack/Wan2.1-14B-T2V-FusionX-VACE
+- QuantStack/Wan2.1_T2V_14B_FusionX_VACE
 base_model_relation: quantized
 library_name: gguf
 quantized_by: lym00
@@ -14,7 +14,7 @@ language:
 license: apache-2.0
 ---
 
-This is a GGUF conversion of [QuantStack/Wan2.1-14B-T2V-FusionX-VACE](https://huggingface.co/QuantStack/Wan2.1-14B-T2V-FusionX-VACE).
+This is a GGUF conversion of [QuantStack/Wan2.1_T2V_14B_FusionX_VACE](https://huggingface.co/QuantStack/Wan2.1_T2V_14B_FusionX_VACE).
 
 All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) GitHub repository.
 
@@ -24,7 +24,7 @@ The model files can be used in [ComfyUI](https://github.com/comfyanonymous/Comfy
 
 | Type         | Name                             | Location                       | Download         |
 | ------------ | -------------------------------- | ------------------------------ | ---------------- |
-| Main Model   | Wan2.1-14B-T2V-FusionX-VACE-GGUF | `ComfyUI/models/unet`          | GGUF (this repo) |
+| Main Model   | Wan2.1_T2V_14B_FusionX_VACE-GGUF | `ComfyUI/models/unet`          | GGUF (this repo) |
 | Text Encoder | umt5-xxl-encoder                 | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
 | VAE          | Wan2_1_VAE_bf16                  | `ComfyUI/models/vae`           | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
 
@@ -36,5 +36,4 @@ The model files can be used in [ComfyUI](https://github.com/comfyanonymous/Comfy
 
 ## Reference
 
-- For more information about the source model, refer to [QuantStack/Wan2.1-14B-T2V-FusionX-VACE](https://huggingface.co/QuantStack/Wan2.1-14B-T2V-FusionX-VACE), where the model creation process is explained.
 - For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).
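The table in the diff above maps each file to a folder inside a ComfyUI install. A minimal shell sketch of that layout follows; `COMFYUI_DIR` and the exact quantized filenames (e.g. a `Q4_K_M` main model, a `Q8_0` text encoder) are assumptions, so substitute the quant level you actually downloaded:

```shell
# Sketch only: COMFYUI_DIR and the filenames below are illustrative
# assumptions, not exact names from this repo.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"

# Create the target folders listed in the table.
mkdir -p "$COMFYUI_DIR/models/unet" \
         "$COMFYUI_DIR/models/text_encoders" \
         "$COMFYUI_DIR/models/vae"

# Move each download into place (each step is skipped if the file
# is not present in the current directory).
for f in Wan2.1_T2V_14B_FusionX_VACE-*.gguf; do
  if [ -f "$f" ]; then mv "$f" "$COMFYUI_DIR/models/unet/"; fi
done
if [ -f umt5-xxl-encoder-Q8_0.gguf ]; then
  mv umt5-xxl-encoder-Q8_0.gguf "$COMFYUI_DIR/models/text_encoders/"
fi
if [ -f Wan2_1_VAE_bf16.safetensors ]; then
  mv Wan2_1_VAE_bf16.safetensors "$COMFYUI_DIR/models/vae/"
fi
```

After restarting ComfyUI (or refreshing its model lists), the GGUF main model should appear under the unet loader node provided by the ComfyUI-GGUF custom node.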