Wan2.1-T2V-14B Old Book Illustrations LoRA

Example prompts:

- An old book illustration of a dog walking down a path
- An old book illustration of waves continually crashing on a rocky shore, clouds pass overhead
- An old book illustration of a rose growing from the ground into a full flower, timelapse

All three examples use the standard Wan2.1 negative prompt (Chinese; roughly: "vivid color tones, overexposed, static, blurry details, subtitles, style, artwork, painting, frame, still, overall grayish, worst quality, low quality, JPEG compression artifacts, ugly, mutilated, extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured, malformed limbs, fused fingers, motionless frame, cluttered background, three legs, many people in the background, walking backwards"):

色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走
Model Description
A LoRA adapter for the Wan-AI/Wan2.1-T2V-14B text-to-video model, trained on a subset of images from the AdamLucek/oldbookillustrations-small dataset.
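If you want to inspect the training data, the dataset can be loaded with the datasets library; a minimal sketch, assuming the default split layout on the Hub:

```python
from datasets import load_dataset

# Load the image dataset used for this LoRA from the Hugging Face Hub.
ds = load_dataset("AdamLucek/oldbookillustrations-small")
print(ds)  # inspect available splits and columns
```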
Trigger words
You should use `An old book illustration of a` in your prompt to trigger the video generation.
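In practice this just means prefixing each subject with the trigger phrase; a minimal sketch, reusing subjects from the examples above:

```python
# Build prompts by prefixing subjects with the LoRA trigger phrase.
trigger = "An old book illustration of a"
subjects = [
    "dog walking down a path",
    "rose growing from the ground into a full flower, timelapse",
]
prompts = [f"{trigger} {subject}" for subject in subjects]
```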
Using with Diffusers
Install diffusers from source:

```bash
pip install git+https://github.com/huggingface/diffusers.git
```
```python
import torch

from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.schedulers.scheduling_unipc_multistep import UniPCMultistepScheduler
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-14B-Diffusers"

# Keep the VAE in float32 for numerical stability; the rest of the pipeline runs in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)

# UniPC scheduler; flow_shift=5.0 is the recommended value for 720p output (3.0 for 480p).
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config,
    flow_shift=5.0,
)
pipe.to("cuda")

# Apply the Old Book Illustrations LoRA on top of the base model.
pipe.load_lora_weights("AdamLucek/Wan2.1-T2V-14B-OldBookIllustrations")

# Optional, for low-VRAM environments; offloading manages device placement itself,
# so drop pipe.to("cuda") above if you enable it.
pipe.enable_model_cpu_offload()

prompt = "An old book illustration of a dog walking down a path"
# Standard Wan2.1 negative prompt (Chinese); roughly: "vivid color tones, overexposed,
# static, blurry details, subtitles, style, artwork, painting, frame, still, overall
# grayish, worst quality, low quality, JPEG compression artifacts, ugly, mutilated,
# extra fingers, poorly drawn hands, poorly drawn face, deformed, disfigured,
# malformed limbs, fused fingers, motionless frame, cluttered background, three legs,
# many people in the background, walking backwards".
negative_prompt = "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走"

# Generate an 81-frame 720p clip and save it at 16 fps (~5 seconds).
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=720,
    width=1280,
    num_frames=81,
    guidance_scale=5.0,
    num_inference_steps=32,
).frames[0]
export_to_video(output, "output.mp4", fps=16)
```
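To weaken or strengthen the illustration style, recent diffusers versions expose per-adapter weights through the PEFT integration; a minimal sketch, where `adapter_name="oldbook"` is an arbitrary label chosen here for illustration:

```python
# Load the LoRA under an explicit adapter name ...
pipe.load_lora_weights(
    "AdamLucek/Wan2.1-T2V-14B-OldBookIllustrations",
    adapter_name="oldbook",
)
# ... then scale its contribution (1.0 = full strength).
pipe.set_adapters(["oldbook"], adapter_weights=[0.8])
```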
Using with ComfyUI
Use the provided ComfyUI workflow file, `oldbookillustration_workflow.json`.
To quickly download the recommended text encoder, VAE, and Wan2.1 diffusion model files, run:
```bash
wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors
wget https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_t2v_14B_bf16.safetensors
```
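These files then go into ComfyUI's model folders; a minimal sketch, assuming a default ComfyUI checkout in the current directory (folder names can vary across ComfyUI versions):

```bash
# Move each download into the matching ComfyUI model folder.
mv umt5_xxl_fp8_e4m3fn_scaled.safetensors ComfyUI/models/text_encoders/
mv wan_2.1_vae.safetensors ComfyUI/models/vae/
mv wan2.1_t2v_14B_bf16.safetensors ComfyUI/models/diffusion_models/
# The LoRA weights from this repo go in ComfyUI/models/loras/.
```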
Download model
Weights for this model are available in Safetensors format and can be downloaded from the Files & versions tab.
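Alternatively, the repository can be fetched programmatically; a minimal sketch using huggingface_hub:

```python
from huggingface_hub import snapshot_download

# Download the full LoRA repository (Safetensors weights included) to the local cache.
local_dir = snapshot_download("AdamLucek/Wan2.1-T2V-14B-OldBookIllustrations")
print(local_dir)
```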
Model tree for AdamLucek/Wan2.1-T2V-14B-OldBookIllustrations
Base model: Wan-AI/Wan2.1-T2V-14B