
HunyuanVideo text-to-video (t2v) LoRA tuned on the dataset

https://huggingface.co/datasets/svjack/Genshin-Impact-XiangLing-animatediff-with-score-organized

This is an early training checkpoint.

Installation

Prerequisites

Before you begin, ensure you have the following installed:

  • git-lfs
  • cbm
  • ffmpeg

You can install these prerequisites using the following command:

sudo apt-get update && sudo apt-get install git-lfs cbm ffmpeg
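
To confirm the tools are on your PATH before continuing, a quick check (plain Python; nothing repo-specific is assumed):

    # Sanity check for the prerequisite command-line tools.
    import shutil

    for tool in ('git-lfs', 'ffmpeg'):
        print(tool, '->', shutil.which(tool) or 'NOT FOUND - install it first')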

Installation Steps

  1. Install comfy-cli:

    pip install comfy-cli
    
  2. Initialize ComfyUI:

    comfy --here install
    
  3. Clone and Install ComfyScript:

    cd ComfyUI/custom_nodes
    git clone https://github.com/Chaoses-Ib/ComfyScript.git
    cd ComfyScript
    pip install -e ".[default,cli]"
    pip uninstall aiohttp
    pip install -U aiohttp
    
  4. Clone and Install ComfyUI-HunyuanVideoWrapper:

    cd ../
    git clone https://github.com/svjack/ComfyUI-HunyuanVideoWrapper
    cd ComfyUI-HunyuanVideoWrapper
    pip install -r requirements.txt
    
  5. Load ComfyScript Runtime:

    from comfy_script.runtime import *
    load()
    from comfy_script.runtime.nodes import *
    
  6. Install Example Dependencies:

    cd examples
    comfy node install-deps --workflow='hunyuanvideo lora Walking Animation Share.json'
    
  7. Update ComfyUI Dependencies:

    cd ../../..
    pip install --upgrade torch torchvision torchaudio -r requirements.txt
    
  8. Transpile Example Workflow:

    python -m comfy_script.transpile custom_nodes/ComfyUI-HunyuanVideoWrapper/examples/hyvideo_t2v_example_01.json
    
  9. Download and Place Model Files:

    Download the required model files from Hugging Face:

    huggingface-cli download Kijai/HunyuanVideo_comfy --local-dir ./HunyuanVideo_comfy
    

    Copy the downloaded files to the appropriate directories:

    cp -r HunyuanVideo_comfy/ .
    cp HunyuanVideo_comfy/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors ComfyUI/models/diffusion_models
    cp HunyuanVideo_comfy/hunyuan_video_vae_bf16.safetensors ComfyUI/models/vae
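
To confirm the weights landed where the workflow will look for them, a minimal check (paths taken from the copy commands above):

    # Verify the model files are in ComfyUI's model folders.
    from pathlib import Path

    files = [
        Path('ComfyUI/models/diffusion_models/hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors'),
        Path('ComfyUI/models/vae/hunyuan_video_vae_bf16.safetensors'),
    ]
    for f in files:
        print(f, '->', 'OK' if f.exists() else 'MISSING')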
    

Genshin Impact Character XiangLing LoRA Example (early training checkpoint)

  1. Download the XiangLing LoRA Model:

Download the XiangLing LoRA model from Hugging Face:

xiangling_test_epoch4.safetensors

Copy the model to the loras directory:

cp xiangling_test_epoch4.safetensors ComfyUI/models/loras
  2. Run the Workflow:

Create a Python script run_t2v_xiangling_lora.py:

#### character does something (seed 42)
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    # HunyuanVideo VAE and the XiangLing LoRA (strength 2.0)
    vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
    lora = HyVideoLoraSelect('xiangling_test_epoch4.safetensors', 2.0, None, None)
    # fp8 cfg-distilled HunyuanVideo model with the LoRA applied
    model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
    # Text encoders: LLaVA-LLaMA-3-8B plus CLIP ViT-L
    hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
    hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder, "solo,Xiangling, cook rice in a pot genshin impact ,1girl,highres,", 'bad quality video', 'video', None, None, None)
    samples = HyVideoSampler(model, hyvid_embeds, 478, 512, 85, 30, 6, 9, 42, 1, None, 1, None)
    images = HyVideoDecode(vae, samples, True, 64, 256, True)
    # Write an H.264 MP4 at 24 fps
    _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None,
                        pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)

Run the script:

python run_t2v_xiangling_lora.py

  • prompt = "solo,Xiangling, cook rice in a pot genshin impact ,1girl,highres,"
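
The clip is written by VHSVideoCombine with the filename prefix 'HunyuanVideo'. Assuming the default ComfyUI output folder relative to where you ran the script (an assumption about your working directory), a quick way to find the newest render:

# Hedged sketch: locate the most recent MP4 produced by the workflow.
from pathlib import Path

out_dir = Path('ComfyUI/output')  # assumption: default ComfyUI output location
clips = sorted(out_dir.glob('HunyuanVideo*.mp4'), key=lambda p: p.stat().st_mtime)
print(clips[-1] if clips else 'no renders found yet')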


A second variant uses the epoch-3 checkpoint (xiangling_test_epoch3.safetensors, downloaded and copied to ComfyUI/models/loras the same way) with a more dynamic prompt:

#### cook rice (epoch-3 LoRA, seed 42)
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

with Workflow():
    vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
    lora = HyVideoLoraSelect('xiangling_test_epoch3.safetensors', 2.0, None, None)
    model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
    hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
    hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder, "solo,Xiangling, cook rice in a pot, (genshin impact) ,1girl,highres, dynamic", 'bad quality video', 'video', None, None, None)
    # Sampler values change from 85/30/6 above to 49/25/8 here
    samples = HyVideoSampler(model, hyvid_embeds, 478, 512, 49, 25, 8, 9, 42, 1, None, 1, None)
    images = HyVideoDecode(vae, samples, True, 64, 256, True)
    _ = VHSVideoCombine(images, 24, 0, 'HunyuanVideo', 'video/h264-mp4', False, True, None, None, None,
                        pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)

Run the script:

python run_t2v_xiangling_lora.py

  • prompt = "solo,Xiangling, cook rice in a pot, (genshin impact) ,1girl,highres, dynamic"
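
Both scripts fix the seed at 42 (the value marked "(seed 42)" in the first script). To explore variation, a hedged sketch that sweeps a few seeds over the epoch-4 graph; only the seed argument and the output filename prefix change, everything else mirrors the first script:

# Hedged sketch: same epoch-4 workflow, several seeds.
from comfy_script.runtime import *
load()
from comfy_script.runtime.nodes import *

for seed in (42, 123, 777):
    with Workflow():
        vae = HyVideoVAELoader(r'hunyuan_video_vae_bf16.safetensors', 'bf16', None)
        lora = HyVideoLoraSelect('xiangling_test_epoch4.safetensors', 2.0, None, None)
        model = HyVideoModelLoader(r'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'bf16', 'fp8_e4m3fn', 'offload_device', 'sdpa', None, None, lora)
        hyvid_text_encoder = DownloadAndLoadHyVideoTextEncoder('Kijai/llava-llama-3-8b-text-encoder-tokenizer', 'openai/clip-vit-large-patch14', 'fp16', False, 2, 'disabled')
        hyvid_embeds = HyVideoTextEncode(hyvid_text_encoder, "solo,Xiangling, cook rice in a pot genshin impact ,1girl,highres,", 'bad quality video', 'video', None, None, None)
        samples = HyVideoSampler(model, hyvid_embeds, 478, 512, 85, 30, 6, 9, seed, 1, None, 1, None)
        images = HyVideoDecode(vae, samples, True, 64, 256, True)
        _ = VHSVideoCombine(images, 24, 0, f'HunyuanVideo_seed{seed}', 'video/h264-mp4', False, True, None, None, None,
                            pix_fmt='yuv420p', crf=19, save_metadata=True, trim_to_audio=False)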

