TEXGen: a Generative Diffusion Model for Mesh Textures
Abstract
While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time optimization of 3D textures. Instead, we focus on the fundamental problem of learning in the UV texture space itself. For the first time, we train a large diffusion model capable of directly generating high-resolution texture maps in a feed-forward manner. To facilitate efficient learning in high-resolution UV spaces, we propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds. Leveraging this architectural design, we train a 700 million parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images. Once trained, our model naturally supports various extended applications, including text-guided texture inpainting, sparse-view texture completion, and text-driven texture synthesis. Project page is at http://cvmi-lab.github.io/TEXGen/.
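The abstract's key architectural idea is interleaving local 2D convolutions on the UV texture map with global attention over points sampled on the mesh surface. A minimal NumPy sketch of one such hybrid block is below; all names, shapes, and the single-head attention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def uv_conv(feat, kernel):
    """Naive 'same' 2D convolution on an (H, W, C) UV feature map.

    The kernel is a (k, k) array of scalar weights shared across channels,
    a simplification of a learned convolution layer.
    """
    H, W, C = feat.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(feat, ((pad, pad), (pad, pad), (0, 0)))
    out = np.zeros_like(feat)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * padded[i:i + H, j:j + W, :]
    return out

def point_attention(feat, uv_coords):
    """Single-head self-attention over features gathered at point UV coords.

    Gathers per-point features from the UV map, attends globally across
    points, and scatters the result back, mimicking the abstract's
    'attention layers on point clouds'.
    """
    H, W, C = feat.shape
    rows = (uv_coords[:, 1] * (H - 1)).astype(int)
    cols = (uv_coords[:, 0] * (W - 1)).astype(int)
    x = feat[rows, cols]                        # (N, C) gathered point features
    scores = x @ x.T / np.sqrt(C)               # (N, N) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    y = weights @ x                             # attended point features
    out = feat.copy()
    np.add.at(out, (rows, cols), y)             # scatter-add back to the UV map
    return out

def hybrid_block(feat, uv_coords, kernel):
    """One interleaved step: local UV convolution, then global point attention."""
    return point_attention(uv_conv(feat, kernel), uv_coords)

rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 16, 8))    # small UV feature map
uv_coords = rng.random((32, 2))            # point UV coordinates in [0, 1)
kernel = np.full((3, 3), 1.0 / 9.0)        # simple averaging kernel
out = hybrid_block(feat, uv_coords, kernel)
print(out.shape)  # (16, 16, 8)
```

The design intuition this illustrates: convolutions capture neighborhood structure in UV space cheaply, while point attention lets distant-but-adjacent-on-the-mesh regions exchange information, which a UV-space convolution alone cannot do across chart boundaries.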
Community
Automated comment (Librarian Bot): the following similar papers were recommended by the Semantic Scholar API.
- RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models (2024)
- Towards Multi-View Consistent Style Transfer with One-Step Diffusion via Vision Conditioning (2024)
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models (2024)
- MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D (2024)
- ARM: Appearance Reconstruction Model for Relightable 3D Generation (2024)
- GaussianAnything: Interactive Point Cloud Latent Diffusion for 3D Generation (2024)
- StyleTex: Style Image-Guided Texture Generation for 3D Models (2024)