# Convolutional Reconstruction Model

Official implementation for *CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model*.

**CRM is a feed-forward model which can generate a 3D textured mesh in 10 seconds.**

## [Project Page](https://ml.cs.tsinghua.edu.cn/~zhengyi/CRM/) | [Arxiv](https://arxiv.org/abs/2403.05034) | [HF-Demo](https://huggingface.co/spaces/Zhengyi/CRM) | [Weights](https://huggingface.co/Zhengyi/CRM)

https://github.com/thu-ml/CRM/assets/40787266/8b325bc0-aa74-4c26-92e8-a8f0c1079382

## Try CRM 🍻
* Try CRM at the [Huggingface Demo](https://huggingface.co/spaces/Zhengyi/CRM).
* Try CRM at the [Replicate Demo](https://replicate.com/camenduru/crm). Thanks [@camenduru](https://github.com/camenduru)!

## Install

### Step 1 - Base

Install the packages one by one; we use **Python 3.9**.

```bash
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
pip install torch-scatter==2.1.1 -f https://data.pyg.org/whl/torch-1.13.1+cu117.html
pip install kaolin==0.14.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.13.1_cu117.html
pip install -r requirements.txt
```

Besides, xformers needs to be installed manually according to the official [doc](https://github.com/facebookresearch/xformers?tab=readme-ov-file#installing-xformers) (not needed for conda installs), e.g.

```bash
pip install ninja
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
```
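
After installing, you can sanity-check the build; one quick way (assuming your xformers version ships the standard `xformers.info` module) is:

```bash
python -m xformers.info
```

It should report the installed version and which kernels were built with CUDA support.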

### Step 2 - Nvdiffrast

Install nvdiffrast according to the official [doc](https://nvlabs.github.io/nvdiffrast/#installation), e.g.

```bash
pip install git+https://github.com/NVlabs/nvdiffrast
```
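
Since version mismatches between torch, kaolin, and nvdiffrast are a common source of runtime errors, a minimal sanity check of the full environment (a sketch, not part of the official scripts) could look like:

```python
# check_env.py -- quick sanity check of the CRM dependencies (hypothetical helper, not in the repo).
import torch
import kaolin
import nvdiffrast.torch as dr

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("kaolin:", kaolin.__version__)

# Creating a rasterizer context allocates nvdiffrast's CUDA resources,
# so this fails early if the extension was not built correctly.
ctx = dr.RasterizeCudaContext()
print("nvdiffrast CUDA context OK")
```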

## Inference

We suggest gradio for a visualized inference:

```
gradio app.py
```

For inference on the command line, simply run
```bash
CUDA_VISIBLE_DEVICES="0" python run.py --inputdir "examples/kunkun.webp"
```
It will output the preprocessed image, the generated 6-view images and CCMs, and a 3D model in OBJ format.
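
To process a whole folder of images, a simple shell loop over the same entry point works (a sketch; it assumes your images live in `examples/` and reuses the only documented flag, `--inputdir`):

```bash
# Run CRM on every image in examples/, one file per call (adjust the glob to your data).
for img in examples/*.{png,jpg,webp}; do
    [ -e "$img" ] || continue  # skip patterns that matched nothing
    CUDA_VISIBLE_DEVICES="0" python run.py --inputdir "$img"
done
```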

**Tips:** (1) If the result is unsatisfactory, please check whether the input image is correctly preprocessed onto a grey background. Otherwise the results will be unpredictable.
(2) Different from the [Huggingface Demo](https://huggingface.co/spaces/Zhengyi/CRM), this official implementation uses a UV texture instead of vertex colors. It gives better texture than the online demo, but a longer generation time owing to the UV texturing.
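
If your input still has a cluttered background, one way to produce a grey-background input (a sketch using the third-party `rembg` package, which is an assumption here, not a stated CRM dependency) is:

```python
# make_input.py -- remove the background and composite onto grey (hypothetical preprocessing sketch).
from PIL import Image
from rembg import remove  # pip install rembg

rgba = remove(Image.open("photo.jpg")).convert("RGBA")
grey = Image.new("RGBA", rgba.size, (127, 127, 127, 255))  # mid-grey backdrop
grey.alpha_composite(rgba)
grey.convert("RGB").save("examples/photo_grey.png")
```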

## Train
We provide the training scripts for multiview generation and their data requirements.
To launch a simple single-instance overfitting run of multiview generation:
```shell
accelerate launch $accelerate_args train.py --config configs/nf7_v3_SNR_rd_size_stroke_train.yaml \
    config.batch_size=1 \
    config.eval_interval=100
```
To launch a simple single-instance overfitting run of CCM generation:
```shell
accelerate launch $accelerate_args train_stage2.py --config configs/stage2-v2-snr_train.yaml \
    config.batch_size=1 \
    config.eval_interval=100
```
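
The `$accelerate_args` variable is left to the user. For example (illustrative values, not repo defaults), a single-node multi-GPU run might use:

```shell
# Hypothetical accelerate settings; tune process count and precision to your hardware.
accelerate_args="--num_machines 1 --num_processes 8 --mixed_precision fp16"
```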

### Data preparation
To specify the data directories, modify the following params in `configs/xxxx.yaml`:
```yaml
base_dir: <path to multiview pixel image basedir>
xyz_base: <path to related CCM image basedir>
caption_csv: <path to caption.csv>
```
The file tree of the base directories should look as follows:
```shell
base_dir
├── uid1
│   ├── 000.png
│   ├── 001.png
│   ├── 002.png
│   ├── 003.png
│   ├── 004.png
│   ├── 005.png
├── uid2
....

xyz_base
├── uid1
│   ├── xyz_new_000.png
│   ├── xyz_new_001.png
│   ├── xyz_new_002.png
│   ├── xyz_new_003.png
│   ├── xyz_new_004.png
│   ├── xyz_new_005.png
├── uid2
....
```
The `train_example` dir shows a minimal example of the training data and the `caption.csv` file.
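
Since the two trees must stay aligned per uid, a small check script (a sketch; file names assumed from the tree above) can catch missing views before training:

```python
# verify_data.py -- check that every uid has 6 views and 6 matching CCMs (hypothetical helper).
from pathlib import Path

base_dir = Path("path/to/base_dir")  # multiview pixel images
xyz_base = Path("path/to/xyz_base")  # corresponding CCM images

for uid in sorted(p.name for p in base_dir.iterdir() if p.is_dir()):
    for i in range(6):
        rgb = base_dir / uid / f"{i:03d}.png"
        ccm = xyz_base / uid / f"xyz_new_{i:03d}.png"
        for f in (rgb, ccm):
            if not f.exists():
                print(f"missing: {f}")
```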

## Todo List
- [x] Release inference code.
- [x] Release pretrained models.
- [ ] Optimize inference code to fit in low-memory GPUs.
- [x] Upload training code.

## Acknowledgement
- [ImageDream](https://github.com/bytedance/ImageDream)
- [nvdiffrast](https://github.com/NVlabs/nvdiffrast)
- [kiuikit](https://github.com/ashawkey/kiuikit)
- [GET3D](https://github.com/nv-tlabs/GET3D)

## Citation

```
@article{wang2024crm,
  title={CRM: Single Image to 3D Textured Mesh with Convolutional Reconstruction Model},
  author={Zhengyi Wang and Yikai Wang and Yifei Chen and Chendong Xiang and Shuo Chen and Dajiang Yu and Chongxuan Li and Hang Su and Jun Zhu},
  journal={arXiv preprint arXiv:2403.05034},
  year={2024}
}
```