denk committed
Commit 39b12b7 · 1 Parent(s): 525ee5d

Files changed (3):
  1. README.md +96 -0
  2. config.json +25 -0
  3. diffusion_pytorch_model.safetensors +3 -0
README.md CHANGED
---
license: apache-2.0
language:
- en
tags:
- video
- video-generation
- video-to-video
- controlnet
- diffusers
pipeline_tag: video-to-video
---
# Dilated Controlnet for Wan2.1 (canny)

This repo contains the dilated controlnet module for the Wan2.1 model. A dilated controlnet has fewer basic blocks than a full controlnet and adds a `stride` parameter; for the Wan2.1 1.3B model, the controlnet block count is 8 and the stride is 3.
See the <a href="https://github.com/TheDenk/wan2.1-dilated-controlnet">GitHub code</a>.

General scheme:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/63fde49f6315a264aba6a7ed/XPa3l2dm-BhuqyAH_Yk63.png)
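
The block count and stride determine how sparsely the controlnet taps into the base transformer. As an illustrative sketch only — the exact injection rule lives in the GitHub repo, and the mapping below (controlnet block `i` feeding base block `i * stride`) is an assumption, not the confirmed implementation:

```python
# Assumption: with 8 controlnet blocks and stride 3, controlnet block i
# injects its residual into base transformer block i * stride.
def controlnet_injection_map(num_controlnet_blocks: int, stride: int) -> list[int]:
    """Return the base-block index each controlnet block would feed."""
    return [i * stride for i in range(num_controlnet_blocks)]

mapping = controlnet_injection_map(num_controlnet_blocks=8, stride=3)
print(mapping)  # [0, 3, 6, 9, 12, 15, 18, 21]
```

This is why the dilated variant is lighter: 8 blocks with stride 3 can cover a spread of base layers instead of mirroring every base block.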
### How to
Clone the repo:
```bash
git clone https://github.com/TheDenk/wan2.1-dilated-controlnet.git
cd wan2.1-dilated-controlnet
```

Create a venv:
```bash
python -m venv venv
source venv/bin/activate
```

Install the requirements:
```bash
pip install -r requirements.txt
```
### Inference examples
#### Inference with cli
```bash
python -m inference.cli_demo \
    --video_path "resources/physical-4.mp4" \
    --prompt "A balloon filled with water was thrown to the ground, exploding and splashing water in all directions. There were graffiti on the wall, studio lighting, and commercial movie shooting." \
    --controlnet_type "canny" \
    --controlnet_stride 3 \
    --base_model_path Wan-AI/Wan2.1-T2V-1.3B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-1.3b-controlnet-canny-v1
```
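
With `--controlnet_type "canny"`, the control signal is a per-frame edge map extracted from `--video_path`. The repo's actual extractor is in the GitHub code (typically OpenCV's Canny); the dependency-light sketch below only thresholds a gradient magnitude, skipping the smoothing and hysteresis steps of true Canny, so treat it as an approximation of the kind of map the controlnet consumes:

```python
import numpy as np

def edge_map(frame: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Rough canny-like edge map for one grayscale frame in [0, 1].

    Real pipelines typically use cv2.Canny; this sketch thresholds the
    gradient magnitude and omits Gaussian smoothing and hysteresis.
    """
    gy, gx = np.gradient(frame.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    return (magnitude > threshold).astype(np.uint8) * 255

# A tiny frame with a vertical step edge starting at column 4:
frame = np.zeros((8, 8), dtype=np.float32)
frame[:, 4:] = 1.0
edges = edge_map(frame)
print(edges.shape, edges.dtype)  # (8, 8) uint8
```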
#### Inference with Gradio
```bash
python -m inference.gradio_web_demo \
    --controlnet_type "canny" \
    --base_model_path Wan-AI/Wan2.1-T2V-1.3B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-1.3b-controlnet-canny-v1
```
#### Detailed Inference
```bash
python -m inference.cli_demo \
    --video_path "resources/physical-4.mp4" \
    --prompt "A balloon filled with water was thrown to the ground, exploding and splashing water in all directions. There were graffiti on the wall, studio lighting, and commercial movie shooting." \
    --controlnet_type "canny" \
    --base_model_path Wan-AI/Wan2.1-T2V-1.3B-Diffusers \
    --controlnet_model_path TheDenk/wan2.1-t2v-1.3b-controlnet-canny-v1 \
    --controlnet_weight 0.8 \
    --controlnet_guidance_start 0.0 \
    --controlnet_guidance_end 0.8 \
    --controlnet_stride 3 \
    --num_inference_steps 50 \
    --guidance_scale 5.0 \
    --video_height 480 \
    --video_width 832 \
    --num_frames 81 \
    --negative_prompt "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards" \
    --seed 42 \
    --out_fps 16 \
    --output_path "result.mp4"
```
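
On `--controlnet_guidance_start` / `--controlnet_guidance_end`: the common controlnet convention is that these are fractions of the denoising schedule during which the control residuals are applied — an assumption here, since the repo's exact rule may differ. Under that reading, `start=0.0`, `end=0.8` with 50 steps means the controlnet shapes the first 40 steps and the last 10 run unconstrained:

```python
# Assumption: start/end are fractions of the denoising schedule; the
# controlnet is active for steps whose fractional position is in [start, end).
def controlnet_active_steps(num_steps: int, start: float, end: float) -> list[int]:
    return [s for s in range(num_steps) if start <= s / num_steps < end]

active = controlnet_active_steps(num_steps=50, start=0.0, end=0.8)
print(len(active))  # 40 of 50 steps
```

Ending control early (`end < 1.0`) is a common trick to let the base model clean up fine detail once the overall structure is locked in.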

## Acknowledgements
Original code and models: [Wan2.1](https://github.com/Wan-Video/Wan2.1).

## Citations
```
@misc{TheDenk,
  title={Dilated Controlnet},
  author={Karachev Denis},
  url={https://github.com/TheDenk/wan2.1-dilated-controlnet},
  publisher={Github},
  year={2025}
}
```

## Contacts
<p>Issues should be raised directly in the repository. For professional support and recommendations, please contact <a>[email protected]</a>.</p>
config.json ADDED
{
  "_class_name": "WanControlnet",
  "_diffusers_version": "0.33.0.dev0",
  "added_kv_proj_dim": null,
  "attention_head_dim": 128,
  "cross_attn_norm": true,
  "downscale_coef": 8,
  "eps": 1e-06,
  "ffn_dim": 8960,
  "freq_dim": 256,
  "image_dim": null,
  "in_channels": 3,
  "num_attention_heads": 12,
  "num_layers": 8,
  "out_proj_dim": 1536,
  "patch_size": [
    1,
    2,
    2
  ],
  "qk_norm": "rms_norm_across_heads",
  "rope_max_seq_len": 1024,
  "text_dim": 4096,
  "vae_channels": 16
}
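
A few fields in this config are mutually constrained: the controlnet's hidden size is `num_attention_heads × attention_head_dim`, which here equals `out_proj_dim` — presumably so the projected residuals match the 1.3B base transformer's hidden size. A quick sanity check over a subset of the values above:

```python
import json

# Subset of config.json above, inlined for the check.
config_text = """{
  "attention_head_dim": 128,
  "num_attention_heads": 12,
  "num_layers": 8,
  "out_proj_dim": 1536
}"""

config = json.loads(config_text)
hidden_dim = config["num_attention_heads"] * config["attention_head_dim"]
print(hidden_dim)  # 1536, matching out_proj_dim
assert hidden_dim == config["out_proj_dim"]
```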
diffusion_pytorch_model.safetensors ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ed549333f08dae6f88447eda73dc18f57706a96807ef7881944a8d1db7e169b6
size 834314664
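
The safetensors entry above is a Git LFS pointer file, not the weights themselves: the real ~834 MB file is downloaded on checkout by `git lfs pull`. The pointer format is just `key value` lines and can be parsed mechanically:

```python
pointer_text = """version https://git-lfs.github.com/spec/v1
oid sha256:ed549333f08dae6f88447eda73dc18f57706a96807ef7881944a8d1db7e169b6
size 834314664
"""

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key/value dict."""
    return dict(line.split(" ", 1) for line in text.strip().splitlines())

pointer = parse_lfs_pointer(pointer_text)
print(pointer["size"])  # size of the real weights file, in bytes
```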