Spaces: Running on Zero

Update README.md
README.md CHANGED
@@ -1,82 +1,10 @@
 ---
-title:
-emoji: 🚀
-colorFrom: blue
-colorTo: purple
-sdk: gradio
-sdk_version: "4.44.1"
-app_file: app.py
-pinned: false
 ---
-
-<div align="center">
-
-# Xora️
-
-</div>
-
-This is the official repository for Xora.
-
-## Table of Contents
-
-- [Introduction](#introduction)
-- [Installation](#installation)
-- [Inference](#inference)
-  - [Inference Code](#inference-code)
-- [Acknowledgement](#acknowledgement)
-
-## Introduction
-
-The performance of Diffusion Transformers is heavily influenced by the number of generated latent pixels (or tokens). In video generation, the token count becomes substantial as the number of frames increases. To address this, we designed a carefully optimized VAE that compresses videos into a smaller number of tokens while utilizing a deeper latent space. This approach enables our model to generate high-quality 768x512 videos at 24 FPS, achieving near real-time speeds.
-
-## Installation
-
-### Setup
-
-The codebase currently uses Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2.
-
-```bash
-git clone https://github.com/LightricksResearch/xora-core.git
-cd xora-core
-
-# create env
-python -m venv env
-source env/bin/activate
-python -m pip install -e .\[inference-script\]
-```
-
-Then, download the model from [Hugging Face](https://huggingface.co/Lightricks/Xora):
-
-```python
-from huggingface_hub import snapshot_download
-
-model_path = 'PATH'  # The local directory to save the downloaded checkpoint
-snapshot_download("Lightricks/Xora", local_dir=model_path, local_dir_use_symlinks=False, repo_type='model')
-```
-
-## Inference
-
-### Inference Code
-
-To use our model, follow the inference code in [`inference.py`](https://github.com/LightricksResearch/xora-core/blob/main/inference.py):
-
-For text-to-video generation:
-
-```bash
-python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --height HEIGHT --width WIDTH
-```
-
-For image-to-video generation:
-
-```bash
-python inference.py --ckpt_dir 'PATH' --prompt "PROMPT" --input_image_path IMAGE_PATH --height HEIGHT --width WIDTH
-```
-
-## Acknowledgement
-
-We are grateful to the following awesome projects, which we drew on when implementing Xora:
-
-- [DiT](https://github.com/facebookresearch/DiT) and [PixArt-alpha](https://github.com/PixArt-alpha/PixArt-alpha): vision transformers for image generation.
-
-[//]: # "## Citation"
 ---
+title: fastvideogen
+emoji: 🚀
+colorFrom: blue
+colorTo: purple
+sdk: gradio
+sdk_version: "4.44.1"
+app_file: app.py
+pinned: false
 ---
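The removed README documents two `inference.py` invocations (text-to-video and image-to-video) that differ only by the `--input_image_path` flag. As a quick reference, here is a small hypothetical helper, not part of xora-core, that assembles those command lines from the flags shown in the diff; the checkpoint path and prompt are placeholders:

```python
def build_inference_cmd(ckpt_dir, prompt, height, width, input_image_path=None):
    """Assemble the inference.py command from the flags in the old README.

    Illustrative only; this helper does not exist in the xora-core repository.
    """
    cmd = [
        "python", "inference.py",
        "--ckpt_dir", ckpt_dir,
        "--prompt", prompt,
        "--height", str(height),
        "--width", str(width),
    ]
    if input_image_path is not None:
        # Image-to-video mode adds the conditioning image.
        cmd += ["--input_image_path", input_image_path]
    return cmd


# Text-to-video invocation as a single shell-style string:
print(" ".join(build_inference_cmd("PATH", "PROMPT", 512, 768)))
```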