---
library_name: Cosmos
license: other
license_name: tencent-hunyuanworld-1.0-community
license_link: https://github.com/Tencent-Hunyuan/HunyuanWorld-Voyager/blob/main/LICENSE
language:
- en
- zh
tags:
- hunyuan3d
- worldmodel
- 3d-aigc
- 3d-generation
- 3d
- scene-generation
- image-to-video
pipeline_tag: image-to-3d
extra_gated_eu_disallowed: true
---

<div align="center">
  <a href=""><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a> &ensp;
  <a href="https://arxiv.org/abs/2506.04225"><img src="https://img.shields.io/static/v1?label=Tech%20Report&message=Arxiv&color=red"></a> &ensp;
  <a href="https://huggingface.co/tencent/HunyuanWorld-Voyager"><img src="https://img.shields.io/static/v1?label=HunyuanWorld-Voyager&message=HuggingFace&color=yellow"></a>
</div>

We introduce HunyuanWorld-Voyager, a novel video diffusion framework that generates world-consistent 3D point-cloud sequences from a single image along a user-defined camera path. Voyager can generate 3D-consistent scene videos for world exploration that follow custom camera trajectories, and it jointly generates aligned depth and RGB video for effective and direct 3D reconstruction.
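
Because the generated depth video is pixel-aligned with the RGB frames, each frame can be lifted to a colored point cloud by standard pinhole back-projection and placed in a shared world frame using the camera pose from the user-defined trajectory. The sketch below illustrates that idea only; it is not the official Voyager code, and the function and parameter names (`backproject_rgbd`, `fx`/`fy`/`cx`/`cy`, `cam_to_world`) are assumptions for illustration.

```python
# Minimal sketch (not the official Voyager API): back-project one aligned
# RGB-D frame into a colored world-space point cloud, assuming known pinhole
# intrinsics and a camera-to-world pose from the generated trajectory.
import numpy as np

def backproject_rgbd(rgb: np.ndarray, depth: np.ndarray,
                     fx: float, fy: float, cx: float, cy: float,
                     cam_to_world: np.ndarray):
    """Lift an aligned RGB-D frame to world-space points with per-point colors.

    rgb:          (H, W, 3) uint8 image
    depth:        (H, W) metric depth in the camera frame
    cam_to_world: (4, 4) camera-to-world extrinsic matrix
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates

    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)

    # Move camera-frame points into the shared world frame so point clouds
    # from successive frames along the trajectory stay mutually aligned.
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]

    colors = rgb.reshape(-1, 3)
    valid = z.reshape(-1) > 0  # drop pixels with no depth estimate
    return pts_world[valid], colors[valid]
```

Fusing the per-frame outputs of such a routine across the video is what makes the reconstruction "direct": no separate structure-from-motion or multi-view stereo pass is needed when depth and pose are already available.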

## 🔗 BibTeX

If you find [Voyager](https://arxiv.org/abs/2506.04225) useful for your research and applications, please cite using this BibTeX:

```BibTeX
@article{huang2025voyager,
  title={Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation},
  author={Huang, Tianyu and Zheng, Wangguandong and Wang, Tengfei and Liu, Yuhao and Wang, Zhenwei and Wu, Junta and Jiang, Jie and Li, Hui and Lau, Rynson WH and Zuo, Wangmeng and Guo, Chunchao},
  journal={arXiv preprint arXiv:2506.04225},
  year={2025}
}
```

## Acknowledgements

We would like to thank [HunyuanVideo-I2V](https://github.com/Tencent-Hunyuan/HunyuanVideo-I2V) and [HunyuanWorld](https://github.com/Tencent-Hunyuan/HunyuanWorld-1.0). We also thank [VGGT](https://github.com/facebookresearch/vggt), [MoGE](https://github.com/microsoft/MoGe), and [Metric3D](https://github.com/YvanYin/Metric3D) for their open research and exploration.