denk committed on
Commit
613307b
·
1 Parent(s): f472fa8
Files changed (3)
  1. README.md +92 -0
  2. config.json +25 -0
  3. diffusion_pytorch_model.safetensors +3 -0
README.md CHANGED
@@ -1,3 +1,95 @@
---
license: apache-2.0
language:
- en
tags:
- video
- video-generation
- video-to-video
- controlnet
- diffusers
---
# Dilated Controlnet for Wan2.1

This repo contains the dilated ControlNet module for the Wan2.1 model.
The dilated ControlNet has fewer basic blocks than a full-size ControlNet and adds a `stride` parameter. For the Wan 1.3B model, the ControlNet uses 6 blocks and a stride of 4.
See the <a href="https://github.com/TheDenk/wan2.1-dilated-controlnet">GitHub code</a>.
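The exact wiring lives in the GitHub repo; as a rough illustration of what a `stride` parameter can mean here, a small ControlNet stack can feed every `stride`-th block of the larger base transformer. This is a hedged sketch, not the repo's actual code; the function name and the mapping convention are assumptions:

```python
# Illustrative sketch (NOT the repo's API): map each ControlNet block's
# output onto every `stride`-th base transformer block.
def controlnet_injection_map(num_base_blocks: int,
                             num_controlnet_blocks: int,
                             stride: int) -> dict:
    """Return {base_block_index: controlnet_block_index} for injected blocks."""
    mapping = {}
    for cn_idx in range(num_controlnet_blocks):
        base_idx = cn_idx * stride
        if base_idx < num_base_blocks:
            mapping[base_idx] = cn_idx
    return mapping

# Wan2.1 1.3B has 30 transformer blocks; this ControlNet has 6 blocks, stride 4.
print(controlnet_injection_map(30, 6, 4))
# {0: 0, 4: 1, 8: 2, 12: 3, 16: 4, 20: 5}
```

Under this convention, 6 blocks with stride 4 cover the first 24 of the base model's blocks, which is why so few ControlNet blocks suffice.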
### How to
Clone the repo
```bash
git clone https://github.com/TheDenk/wan2.1-dilated-controlnet.git
cd wan2.1-dilated-controlnet
```

Create a virtual environment
```bash
python -m venv venv
source venv/bin/activate
```

Install the requirements
```bash
pip install -r requirements.txt
```

### Inference examples
#### Inference with CLI
```bash
python -m inference.cli_demo \
    --video_path "resources/physical-4.mp4" \
    --prompt "A balloon filled with water was thrown to the ground, exploding and splashing water in all directions. There were graffiti on the wall, studio lighting, and commercial movie shooting." \
    --controlnet_type "hed" \
    --controlnet_stride 4 \
    --base_model_path Wan-AI/Wan2.1-T2V-14B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-14b-controlnet-hed-v1
```

#### Inference with Gradio
```bash
python -m inference.gradio_web_demo \
    --controlnet_type "hed" \
    --base_model_path Wan-AI/Wan2.1-T2V-14B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-14b-controlnet-hed-v1
```

#### Detailed inference
```bash
python -m inference.cli_demo \
    --video_path "resources/physical-4.mp4" \
    --prompt "A balloon filled with water was thrown to the ground, exploding and splashing water in all directions. There were graffiti on the wall, studio lighting, and commercial movie shooting." \
    --controlnet_type "hed" \
    --base_model_path Wan-AI/Wan2.1-T2V-14B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-14b-controlnet-hed-v1 \
    --controlnet_weight 0.8 \
    --controlnet_guidance_start 0.0 \
    --controlnet_guidance_end 0.8 \
    --controlnet_stride 4 \
    --num_inference_steps 50 \
    --guidance_scale 5.0 \
    --video_height 480 \
    --video_width 832 \
    --num_frames 81 \
    --negative_prompt "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards" \
    --seed 42 \
    --out_fps 16 \
    --output_path "result.mp4"
```
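Flags like `--controlnet_guidance_start`/`--controlnet_guidance_end` conventionally gate the fraction of denoising steps during which the ControlNet conditioning is applied. A minimal sketch of that gating, assuming a fractional-progress convention (the repo's exact boundary handling may differ):

```python
# Sketch of a common convention: the ControlNet is applied only while
# progress = step / num_inference_steps lies in [guidance_start, guidance_end).
def controlnet_active_steps(num_inference_steps: int,
                            guidance_start: float,
                            guidance_end: float) -> list:
    """Indices of denoising steps where ControlNet conditioning is applied."""
    return [
        step for step in range(num_inference_steps)
        if guidance_start <= step / num_inference_steps < guidance_end
    ]

# With the flags above (50 steps, start 0.0, end 0.8), conditioning
# covers the first 40 steps and the final 10 run unconditioned.
steps = controlnet_active_steps(50, 0.0, 0.8)
print(len(steps), steps[0], steps[-1])  # 40 0 39
```

Ending the control window early (here at 0.8) is a common trick to let the base model refine fine detail without the control signal in the last steps.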

## Acknowledgements
Original code and models: [Wan2.1](https://github.com/Wan-Video/Wan2.1).

## Citations
```
@misc{TheDenk,
  title={Dilated Controlnet},
  author={Karachev Denis},
  url={https://github.com/TheDenk/wan2.1-dilated-controlnet},
  publisher={Github},
  year={2025}
}
```

## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations, please contact <a>[email protected]</a>.</p>
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_class_name": "WanControlnet",
  "_diffusers_version": "0.33.0.dev0",
  "added_kv_proj_dim": null,
  "attention_head_dim": 128,
  "cross_attn_norm": true,
  "downscale_coef": 8,
  "eps": 1e-06,
  "ffn_dim": 8960,
  "freq_dim": 256,
  "image_dim": null,
  "in_channels": 3,
  "num_attention_heads": 12,
  "num_layers": 6,
  "out_proj_dim": 5120,
  "patch_size": [
    1,
    2,
    2
  ],
  "qk_norm": "rms_norm_across_heads",
  "rope_max_seq_len": 1024,
  "text_dim": 4096,
  "vae_channels": 16
}
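A few useful quantities follow directly from this config. A small sketch (field names match the config above; the `inner_dim` derivation is the standard heads-times-head-dim rule for diffusers-style transformers, and the latent-shape figures are assumptions based on Wan2.1's usual 4x temporal / 8x spatial VAE factors):

```python
# Values copied from the config.json above.
config = {
    "attention_head_dim": 128,
    "num_attention_heads": 12,
    "num_layers": 6,
    "out_proj_dim": 5120,
    "patch_size": [1, 2, 2],  # (temporal, height, width) patching of the latent
}

# Inner hidden size of the ControlNet transformer: heads * head_dim.
inner_dim = config["num_attention_heads"] * config["attention_head_dim"]
print(inner_dim)  # 1536 -- the Wan2.1 1.3B hidden size

# out_proj_dim = 5120 matches the Wan2.1 14B hidden size: the small
# ControlNet's states are projected up before being injected into the
# larger base model.

# Tokens per latent: each (1, 2, 2) latent patch becomes one token.
# Assumed latent shape for 81 frames at 480x832: 21 x 60 x 104
# (4x temporal and 8x spatial VAE compression -- an assumption here).
t, h, w = config["patch_size"]
tokens_480p = (21 // t) * (60 // h) * (104 // w)
print(tokens_480p)  # 32760
```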
diffusion_pytorch_model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ea63480c7eab85a5b01d05a394d467e7698b94a15942dcb55d6a1a0a38bf4ae8
size 705206016
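The LFS pointer's `size` field gives a rough parameter count for the checkpoint. A back-of-the-envelope sketch, assuming bf16 weights (2 bytes per parameter) and ignoring the small safetensors header:

```python
# Rough parameter estimate from file size; bf16 (2 bytes/param) is an
# assumption, and the safetensors header overhead is ignored.
size_bytes = 705_206_016  # from the LFS pointer above
approx_params = size_bytes // 2
print(f"~{approx_params / 1e6:.0f}M parameters")  # ~353M
```

~353M parameters is consistent with a 6-block, 1536-dim transformer plus its per-block projections up to the 14B model's 5120-dim hidden states.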