
Stable Video Diffusion 1.1 TensorRT


This repository hosts the TensorRT version of the Stable Video Diffusion (SVD) 1.1 Image-to-Video model.

Model Details

Please see Stable Video Diffusion (SVD) 1.1 Image-to-Video for the full model details.

This model is intended for research purposes only and should not be used in any way that violates Stability AI's Acceptable Use Policy.

Performance

SVD-XT 1.1 (25 frames, 25 steps)

|             | A100 80GB PCI | A100 80GB SXM | H100 80GB PCI |
|-------------|---------------|---------------|---------------|
| VAE Encoder | 66.70 ms      | 65.68 ms      | 49.07 ms      |
| CLIP        | 105.41 ms     | 53.20 ms      | 91.32 ms      |
| UNet x 25   | 30,367.73 ms  | 27,489.88 ms  | 19,102.98 ms  |
| VAE Decoder | 4,663.63 ms   | 4,544.12 ms   | 3,382.62 ms   |
| Total E2E   | 35,258.38 ms  | 32,166.41 ms  | 22,644.73 ms  |
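
As a rough sanity check on throughput, dividing the end-to-end times above by the 25 output frames gives the effective per-frame latency. A minimal sketch of that arithmetic (the figures are copied straight from the table; only the calculation is new):

# Effective per-frame latency from the end-to-end numbers in the table above.
e2e_ms = {
    "A100 80GB PCI": 35258.38,
    "A100 80GB SXM": 32166.41,
    "H100 80GB PCI": 22644.73,
}
frames = 25  # SVD-XT 1.1 produces 25 frames per clip

for gpu, total_ms in e2e_ms.items():
    per_frame_ms = total_ms / frames
    print(f"{gpu}: {per_frame_ms:.0f} ms/frame (~{1000 / per_frame_ms:.2f} frames/s)")

On an H100 this works out to roughly 0.9 s per generated frame, with the UNet loop accounting for over 80% of the end-to-end time on all three GPUs.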

Usage Example

1. Clone TensorRT and this repo, then launch the NGC container
git clone https://github.com/rajeevsrao/TensorRT.git
cd TensorRT
git checkout release/svd

git lfs install 
git clone https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1-tensorrt

docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:23.12-py3 /bin/bash
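
Before moving on, it can be worth confirming that the container actually sees the GPU. A minimal check using the PyTorch build that ships with the NGC image (the reported device name and CUDA version will vary with your setup):

# Optional sanity check inside the container: is a CUDA device visible?
import torch

assert torch.cuda.is_available(), "No CUDA device visible; check the --gpus flag on docker run"
print("GPU:", torch.cuda.get_device_name(0))
print("CUDA runtime:", torch.version.cuda)
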
2. Install libraries and requirements
cd demo/Diffusion
python3 -m pip install --upgrade pip
pip3 install -r requirements.txt
python3 -m pip install --pre --upgrade --extra-index-url https://pypi.nvidia.com tensorrt
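
To confirm that the TensorRT Python bindings installed correctly, a quick import check is enough (the exact version printed depends on what pip resolved):

# Optional check: the tensorrt package imports and reports its version.
import tensorrt as trt

print("TensorRT version:", trt.__version__)
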
3. Authenticate with Hugging Face
huggingface-cli login
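
If an interactive prompt is inconvenient (for example in a container build or CI job), the same authentication can be done programmatically through huggingface_hub; HF_TOKEN here is an assumed environment variable holding your access token:

# Non-interactive alternative to `huggingface-cli login`.
import os
from huggingface_hub import login

login(token=os.environ["HF_TOKEN"])  # HF_TOKEN is assumed to be set beforehand
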
4. Perform TensorRT-optimized inference:
python3 demo_img2vid.py \
    --version svd-xt-1.1 \
    --onnx-dir /workspace/stable-video-diffusion-img2vid-xt-1-1-tensorrt \
    --engine-dir engine-svd-xt-1-1 \
    --build-static-batch \
    --use-cuda-graph \
    --input-image https://www.hdcarwallpapers.com/walls/2018_chevrolet_camaro_zl1_nascar_race_car_2-HD.jpg
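
The invocation above processes a single image. To reuse the same setup across several inputs, a small driver around the CLI is enough; the image URLs below are placeholders, the flags simply mirror the command above, and engines already present in the --engine-dir directory are typically reused rather than rebuilt:

# Hypothetical convenience wrapper: rerun the demo CLI over several input images.
import subprocess

input_images = [
    "https://example.com/first-image.jpg",   # placeholder URLs
    "https://example.com/second-image.jpg",
]

for url in input_images:
    subprocess.run(
        [
            "python3", "demo_img2vid.py",
            "--version", "svd-xt-1.1",
            "--onnx-dir", "/workspace/stable-video-diffusion-img2vid-xt-1-1-tensorrt",
            "--engine-dir", "engine-svd-xt-1-1",
            "--build-static-batch",
            "--use-cuda-graph",
            "--input-image", url,
        ],
        check=True,
    )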