---
license: mit
tags:
- image-to-3d
- 3d-aigc
- 3d-reconstruction
- 3d-models
- 3d-generation
---

# Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention
*Teaser image of Direct3D-S2*

---

## ✨ News

- May 30, 2025: 🤯 We have released both v1.0 and v1.1. The new model is even faster than FlashAttention-2, with a **12.2×** faster forward pass and a **19.7×** faster backward pass, resulting in nearly **2×** faster inference than v1.0.
- May 30, 2025: 🔨 Released inference code and model.
- May 26, 2025: 🎁 Released a live demo on 🤗 [Hugging Face](https://huggingface.co/spaces/wushuang98/Direct3D-S2-v1.0-demo).
- May 26, 2025: 🚀 Released the paper and project page.

## 📝 Abstract

Generating high-resolution 3D shapes using volumetric representations such as Signed Distance Functions (SDFs) presents substantial computational and memory challenges. We introduce Direct3D-S2, a scalable 3D generation framework based on sparse volumes that achieves superior output quality with dramatically reduced training costs. Our key innovation is the Spatial Sparse Attention (SSA) mechanism, which greatly enhances the efficiency of Diffusion Transformer (DiT) computations on sparse volumetric data. SSA allows the model to effectively process large token sets within sparse volumes, substantially reducing computational overhead and achieving a 3.9× speedup in the forward pass and a 9.6× speedup in the backward pass. Our framework also includes a variational autoencoder (VAE) that maintains a consistent sparse volumetric format across the input, latent, and output stages. Compared to previous 3D VAEs that use heterogeneous representations, this unified design significantly improves training efficiency and stability. Our model is trained on publicly available datasets, and experiments demonstrate that Direct3D-S2 not only surpasses state-of-the-art methods in generation quality and efficiency, but also enables training at 1024³ resolution with just 8 GPUs, a task that typically requires at least 32 GPUs for volumetric representations at 256³ resolution, thus making gigascale 3D generation both practical and accessible.

## 🌟 Highlights

- **Gigascale 3D Generation**: Direct3D-S2 enables training at 1024³ resolution with only 8 GPUs.
- **Spatial Sparse Attention (SSA)**: A novel attention mechanism designed for sparse volumetric data, enabling efficient processing of large token sets (a conceptual sketch follows below).
- **Unified Sparse VAE**: A variational autoencoder that maintains a consistent sparse volumetric format across the input, latent, and output stages, improving training efficiency and stability.
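For intuition only, here is a minimal, self-contained sketch of the simplest ingredient of attention over sparse volumes: only occupied voxels carry tokens, and each token attends only to tokens whose coordinates fall in the same coarse spatial block. This toy is not the released SSA, which combines several attention branches in a fused Triton kernel (see the native-sparse-attention-triton acknowledgement below); every function name, the q/k/v handling (projections omitted), and the block partitioning here are assumptions for exposition.

```python
import torch

def toy_block_sparse_attention(coords, feats, block_size=4):
    """Toy within-block attention over sparse voxel tokens.

    coords: (N, 3) integer coordinates of occupied voxels.
    feats:  (N, C) per-voxel token features.
    Tokens attend only to tokens in the same block_size^3 spatial block.
    """
    block = coords // block_size                    # (N, 3) coarse block index
    # Fold the 3D block index into a single integer id (assumes < 2^20 blocks per axis).
    key = (block[:, 0] << 40) | (block[:, 1] << 20) | block[:, 2]
    out = torch.empty_like(feats)
    scale = feats.shape[-1] ** -0.5
    for b in key.unique():
        idx = (key == b).nonzero(as_tuple=True)[0]  # tokens in this block
        x = feats[idx]                              # queries = keys = values (no projections)
        attn = torch.softmax((x @ x.T) * scale, dim=-1)
        out[idx] = attn @ x
    return out

# Example: 1,000 occupied voxels in a 64^3 grid with 32-dim features.
coords = torch.randint(0, 64, (1000, 3))
feats = torch.randn(1000, 32)
out = toy_block_sparse_attention(coords, feats)
print(out.shape)  # torch.Size([1000, 32])
```

In the actual kernel, this per-block Python loop is fused into a single sparse-attention pass, which is where the reported forward/backward speedups come from.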
## 🚀 Getting Started

### Installation

```sh
git clone https://github.com/DreamTechAI/Direct3D-S2.git
cd Direct3D-S2
pip install -r requirements.txt
pip install -e .
```

### Usage

```python
from direct3d_s2.pipeline import Direct3DS2Pipeline

pipeline = Direct3DS2Pipeline.from_pretrained(
    'wushuang98/Direct3D-S2',
    subfolder="direct3d-s2-v-1-1"
).to("cuda:0")

mesh = pipeline(
    'assets/test/13.png',
    sdf_resolution=1024,  # 512 or 1024
    remesh=False,  # Switch to True if you need to reduce the number of triangles.
)["mesh"]
mesh.export('output.obj')
```
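If the 1024³ setting is too heavy for your GPU, a lower-resolution variant may help. The sketch below only recombines parameters already documented above (`sdf_resolution=512` with `remesh=True`); the reduced memory footprint is an expectation, not a documented guarantee.

```python
from direct3d_s2.pipeline import Direct3DS2Pipeline

pipeline = Direct3DS2Pipeline.from_pretrained(
    'wushuang98/Direct3D-S2',
    subfolder="direct3d-s2-v-1-1"
).to("cuda:0")

# Same API as above: lower SDF resolution, with remeshing enabled
# to reduce the triangle count of the exported mesh.
mesh = pipeline(
    'assets/test/13.png',
    sdf_resolution=512,
    remesh=True,
)["mesh"]
mesh.export('output_512.obj')
```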
### Web Demo

We provide a Gradio web demo for Direct3D-S2, which lets you generate 3D meshes from images interactively:

```bash
python app.py
```

## 🤗 Acknowledgements

Thanks to the following repositories for their great work, which helped us a lot in the development of Direct3D-S2:

- [Trellis](https://github.com/microsoft/TRELLIS)
- [SparseFlex](https://github.com/VAST-AI-Research/TripoSF)
- [native-sparse-attention-triton](https://github.com/XunhaoLai/native-sparse-attention-triton)
- [diffusers](https://github.com/huggingface/diffusers)

## 📄 License

Direct3D-S2 is released under the MIT License. See [LICENSE](LICENSE) for details.

## 📖 Citation

If you find our work useful, please consider citing our paper:

```bibtex
@article{wu2025direct3ds2gigascale3dgeneration,
  title={Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention},
  author={Shuang Wu and Youtian Lin and Feihu Zhang and Yifei Zeng and Yikang Yang and Yajie Bao and Jiachen Qian and Siyu Zhu and Philip Torr and Xun Cao and Yao Yao},
  journal={arXiv preprint arXiv:2505.17412},
  year={2025}
}
```