
EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning

*Equal Contribution.
Terminal Technology Department, Alipay, Ant Group.

πŸš€ EchoMimic Series

  • EchoMimicV1: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. GitHub
  • EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation. GitHub

πŸ“£ Updates

  • [2024.12.10] πŸ”₯ EchoMimic is accepted by AAAI 2025.
  • [2024.11.21] πŸ”₯πŸ”₯πŸ”₯ We release our EchoMimicV2 codes and models.
  • [2024.08.02] πŸ”₯ EchoMimic is now available on Hugging Face with an A100 GPU. Thanks to Wenmeng Zhou@ModelScope.
  • [2024.07.25] πŸ”₯πŸ”₯πŸ”₯ Accelerated models and pipe on Audio Driven are released. The inference speed can be improved by 10x (from ~7mins/240frames to ~50s/240frames on V100 GPU)
  • [2024.07.23] πŸ”₯ EchoMimic gradio demo on modelscope is ready.
  • [2024.07.23] πŸ”₯ EchoMimic gradio demo on huggingface is ready. Thanks Sylvain Filoni@fffiloni.
  • [2024.07.17] πŸ”₯πŸ”₯πŸ”₯ Accelerated models and pipe on Audio + Selected Landmarks are released. The inference speed can be improved by 10x (from ~7mins/240frames to ~50s/240frames on V100 GPU)
  • [2024.07.14] πŸ”₯ ComfyUI is now available. Thanks @smthemex for the contribution.
  • [2024.07.13] πŸ”₯ Thanks NewGenAI for the video installation tutorial.
  • [2024.07.13] πŸ”₯ We release our pose&audio driven codes and models.
  • [2024.07.12] πŸ”₯ WebUI and GradioUI versions are released. We thank @greengerong @Robin021 and @O-O1024 for their contributions.
  • [2024.07.12] πŸ”₯ Our paper is now public on arXiv.
  • [2024.07.09] πŸ”₯ We release our audio driven codes and models.

πŸŒ… Gallery

Audio Driven (Sing)

Audio Driven (English)

Audio Driven (Chinese)

Landmark Driven

Audio + Selected Landmark Driven

(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)

βš’οΈ Installation

Download the Codes

  git clone https://github.com/BadToBest/EchoMimic
  cd EchoMimic

Python Environment Setup

  • Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
  • Tested GPUs: A100(80G) / RTX4090D (24G) / V100(16G)
  • Tested Python Version: 3.8 / 3.10 / 3.11

Create a conda environment (recommended):

  conda create -n echomimic python=3.8
  conda activate echomimic

Install packages with pip

  pip install -r requirements.txt

Download ffmpeg-static

Download and decompress ffmpeg-static, then export its path:

export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
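If inference later fails to locate ffmpeg, a small check like the following can confirm the binary is reachable. `find_ffmpeg` is a hypothetical helper for illustration, not part of the repo:

```python
import os
import shutil

def find_ffmpeg():
    """Locate the ffmpeg binary, preferring the FFMPEG_PATH directory set above."""
    root = os.environ.get("FFMPEG_PATH")
    if root:
        candidate = os.path.join(root, "ffmpeg")
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    # Fall back to any ffmpeg already on the system PATH.
    return shutil.which("ffmpeg")
```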

Download pretrained weights

git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights

The pretrained_weights directory is organized as follows.

./pretrained_weights/
β”œβ”€β”€ denoising_unet.pth
β”œβ”€β”€ reference_unet.pth
β”œβ”€β”€ motion_module.pth
β”œβ”€β”€ face_locator.pth
β”œβ”€β”€ sd-vae-ft-mse
β”‚   └── ...
β”œβ”€β”€ sd-image-variations-diffusers
β”‚   └── ...
└── audio_processor
    └── whisper_tiny.pt

Here denoising_unet.pth, reference_unet.pth, motion_module.pth, and face_locator.pth are the main EchoMimic checkpoints. The other models (sd-vae-ft-mse, sd-image-variations-diffusers, and the whisper audio processor) can also be downloaded from their original hubs; we thank the authors for their brilliant work.
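Assuming the layout above, a quick sanity check can verify the download completed. `missing_weights` is a hypothetical helper, not part of the repo:

```python
import os

# Main EchoMimic checkpoints plus the whisper audio processor, per the layout above.
EXPECTED_FILES = [
    "denoising_unet.pth",
    "reference_unet.pth",
    "motion_module.pth",
    "face_locator.pth",
    os.path.join("audio_processor", "whisper_tiny.pt"),
]

def missing_weights(root="./pretrained_weights"):
    """Return the expected checkpoint files that are not present under `root`."""
    return [f for f in EXPECTED_FILES
            if not os.path.isfile(os.path.join(root, f))]
```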

Audio-Driven Algo Inference

Run the python inference script:

  python -u infer_audio2vid.py
  python -u infer_audio2vid_pose.py

Audio-Driven Algo Inference on Your Own Cases

Edit the inference config file ./configs/prompts/animation.yaml, and add your own case:

test_cases:
  "path/to/your/image":
    - "path/to/your/audio"

Then run the python inference script:

  python -u infer_audio2vid.py
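Each entry under test_cases maps one reference image to the list of audio clips that should drive it, so several cases can be batched in a single config. A hypothetical multi-case example (all paths are placeholders):

```yaml
test_cases:
  "path/to/your/image_1.png":
    - "path/to/your/audio_1.wav"
    - "path/to/your/audio_2.wav"
  "path/to/your/image_2.png":
    - "path/to/your/audio_3.wav"
```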

Motion Alignment between Ref. Img. and Driven Vid.

(First, download the checkpoints with the '_pose.pth' suffix from Hugging Face.)

Edit driver_video and ref_image to your path in demo_motion_sync.py, then run

  python -u demo_motion_sync.py
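The edit amounts to pointing two variables at your own files, along these lines (variable names taken from the instructions above; the exact names in demo_motion_sync.py may differ):

```python
# Inside demo_motion_sync.py: replace the sample paths with your own inputs.
driver_video = "path/to/your/driving_video.mp4"   # video whose motion is extracted
ref_image = "path/to/your/reference_image.png"    # portrait to be animated
```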

Audio & Pose-Driven Algo Inference

Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py

Pose-Driven Algo Inference

Set draw_mouse=True in line 135 of infer_audio2vid_pose.py. Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py

Run the Gradio UI

Thanks to the contribution from @Robin021:


python -u webgui.py --server_port=3000

πŸ“ Release Plans

| Status | Milestone | ETA |
|--------|-----------|-----|
| βœ… | The inference source code of the Audio-Driven algo meets everyone on GitHub | 9th July, 2024 |
| βœ… | Pretrained models trained on English and Mandarin Chinese to be released | 9th July, 2024 |
| βœ… | The inference source code of the Pose-Driven algo meets everyone on GitHub | 13th July, 2024 |
| βœ… | Pretrained models with better pose control to be released | 13th July, 2024 |
| βœ… | Accelerated models to be released | 17th July, 2024 |
| πŸš€ | Pretrained models with better singing performance to be released | TBD |
| πŸš€ | Large-scale and high-resolution Chinese-based talking head dataset | TBD |

βš–οΈ Disclaimer

This project is intended for academic research, and we explicitly disclaim any responsibility for user-generated content. Users are solely liable for their actions while using the generative model. The project contributors have no legal affiliation with, nor accountability for, users' behaviors. It is imperative to use the generative model responsibly, adhering to both ethical and legal standards.

πŸ™πŸ» Acknowledgements

We would like to thank the contributors to the AnimateDiff, Moore-AnimateAnyone and MuseTalk repositories for their open research and exploration.

We are also grateful to V-Express and hallo for their outstanding work in the area of diffusion-based talking heads.

If we have missed any open-source projects or related articles, we will add the acknowledgement immediately.

πŸ“’ Citation

If you find our work useful for your research, please consider citing the paper:

@misc{chen2024echomimic,
  title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
  author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
  year={2024},
  eprint={2407.08136},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

🌟 Star History

Star History Chart
