denk committed on
Commit 5eaa7d4 · 1 Parent(s): f05b079
Files changed (3):
  1. README.md +93 -0
  2. config.json +25 -0
  3. diffusion_pytorch_model.safetensors +3 -0
README.md CHANGED
---
license: apache-2.0
language:
- en
tags:
- video
- video-generation
- video-to-video
- controlnet
- diffusers
---
# Dilated Controlnet for Wan2.1 (canny)

This repo contains the dilated ControlNet module for the Wan2.1 model.
A dilated ControlNet has fewer basic blocks than the base transformer and adds a `stride` parameter. For the Wan2.1 14B model, the ControlNet block count is 6 and the stride is 4.
See the <a href="https://github.com/TheDenk/wan2.1-dilated-controlnet">GitHub code</a>.
### How to
Clone repo
```bash
git clone https://github.com/TheDenk/wan2.1-dilated-controlnet.git
cd wan2.1-dilated-controlnet
```

Create venv
```bash
python -m venv venv
source venv/bin/activate
```

Install requirements
```bash
pip install -r requirements.txt
```
### Inference examples
#### Inference with cli
```bash
python -m inference.cli_demo \
    --video_path "resources/physical-4.mp4" \
    --prompt "A balloon filled with water was thrown to the ground, exploding and splashing water in all directions. There were graffiti on the wall, studio lighting, and commercial movie shooting." \
    --controlnet_type "canny" \
    --controlnet_stride 4 \
    --base_model_path Wan-AI/Wan2.1-T2V-14B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-14b-controlnet-canny-v1
```
#### Inference with Gradio
```bash
python -m inference.gradio_web_demo \
    --controlnet_type "canny" \
    --base_model_path Wan-AI/Wan2.1-T2V-14B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-14b-controlnet-canny-v1
```
#### Detailed Inference
```bash
python -m inference.cli_demo \
    --video_path "resources/physical-4.mp4" \
    --prompt "A balloon filled with water was thrown to the ground, exploding and splashing water in all directions. There were graffiti on the wall, studio lighting, and commercial movie shooting." \
    --controlnet_type "canny" \
    --base_model_path Wan-AI/Wan2.1-T2V-14B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-14b-controlnet-canny-v1 \
    --controlnet_weight 0.8 \
    --controlnet_guidance_start 0.0 \
    --controlnet_guidance_end 0.8 \
    --controlnet_stride 4 \
    --num_inference_steps 50 \
    --guidance_scale 5.0 \
    --video_height 480 \
    --video_width 832 \
    --num_frames 81 \
    --negative_prompt "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards" \
    --seed 42 \
    --out_fps 16 \
    --output_path "result.mp4"
```
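The `--controlnet_guidance_start` / `--controlnet_guidance_end` pair gates when the ControlNet contributes during denoising. A sketch of that schedule, assuming the common convention that the ControlNet is active while the normalized step index falls inside the window (the repo's exact boundary handling may differ):

```python
def controlnet_active_steps(num_steps, start, end):
    """Return the 0-indexed denoising steps where controlnet guidance applies,
    assuming it is active while step / num_steps lies in [start, end)."""
    return [s for s in range(num_steps) if start <= s / num_steps < end]

# With the defaults above: 50 steps, window [0.0, 0.8)
steps = controlnet_active_steps(50, 0.0, 0.8)
print(len(steps), steps[0], steps[-1])  # 40 0 39
```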
## Acknowledgements
Original code and models: [Wan2.1](https://github.com/Wan-Video/Wan2.1).
## Citations
```
@misc{TheDenk,
    title={Dilated Controlnet},
    author={Karachev Denis},
    url={https://github.com/TheDenk/wan2.1-dilated-controlnet},
    publisher={Github},
    year={2025}
}
```
## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations please contact <a>[email protected]</a>.</p>
config.json ADDED
{
  "_class_name": "WanControlnet",
  "_diffusers_version": "0.33.0.dev0",
  "added_kv_proj_dim": null,
  "attention_head_dim": 128,
  "cross_attn_norm": true,
  "downscale_coef": 8,
  "eps": 1e-06,
  "ffn_dim": 8960,
  "freq_dim": 256,
  "image_dim": null,
  "in_channels": 3,
  "num_attention_heads": 12,
  "num_layers": 6,
  "out_proj_dim": 5120,
  "patch_size": [1, 2, 2],
  "qk_norm": "rms_norm_across_heads",
  "rope_max_seq_len": 1024,
  "text_dim": 4096,
  "vae_channels": 16
}
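For orientation, the key shapes implied by this config: each of the `num_layers = 6` blocks runs at an inner dimension of `num_attention_heads * attention_head_dim`, and `out_proj_dim` maps the control features into the base transformer's hidden size (that 5120 is the Wan 14B width is an inference from the config, not stated in this repo):

```python
# Subset of the config.json values shown above
config = {
    "attention_head_dim": 128,
    "num_attention_heads": 12,
    "num_layers": 6,
    "ffn_dim": 8960,
    "out_proj_dim": 5120,
}

# Width at which each controlnet block operates
inner_dim = config["num_attention_heads"] * config["attention_head_dim"]
print(inner_dim)               # 1536
print(config["out_proj_dim"])  # 5120, width of the features handed to the base model
```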
diffusion_pytorch_model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0d4af935d5b3ee7cc167da354aa809e85edb8a26a4b8616eb32adeac9733d487
size 705206016
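The checkpoint is stored via Git LFS; the pointer above records its SHA-256 digest and byte size. A generic integrity check for a downloaded copy (the local path is illustrative):

```python
import hashlib
import os

def verify_lfs_file(path, expected_oid, expected_size):
    """Check a downloaded file against a Git LFS pointer's oid/size fields."""
    if os.path.getsize(path) != expected_size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == expected_oid

# Values taken from the pointer above; run only if the file is present locally.
if os.path.exists("diffusion_pytorch_model.safetensors"):
    print(verify_lfs_file(
        "diffusion_pytorch_model.safetensors",
        "0d4af935d5b3ee7cc167da354aa809e85edb8a26a4b8616eb32adeac9733d487",
        705206016,
    ))
```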