Update Edit3D-Bench dataset card with correct license, paper, project page, and code links
This PR improves the Edit3D-Bench dataset card by:
- Correcting the license in the metadata from Apache-2.0 to MIT, aligning with the license specified in the associated GitHub repository.
- Adding direct links to the Hugging Face paper (`https://huggingface.co/papers/2508.19247`), the main project page (`https://huanngzh.github.io/VoxHammer-Page/`), and the GitHub repository (`https://github.com/Nelipot-Lee/VoxHammer`) for improved discoverability.
- Incorporating the paper title and link into the introductory description.
- Enhancing the BibTeX citation with a `url` field pointing to the Hugging Face paper page.
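The metadata side of these changes can be sanity-checked programmatically. Below is a minimal sketch (not part of the PR) using `huggingface_hub`'s `DatasetCard` API; the repo id `<namespace>/Edit3D-Bench` is a placeholder, since the dataset's exact Hub path is not restated here.

```python
# Minimal sketch: inspect the updated card metadata after this PR is merged.
# "<namespace>/Edit3D-Bench" is a placeholder repo id (substitute the real Hub path).
from huggingface_hub import DatasetCard

card = DatasetCard.load("<namespace>/Edit3D-Bench")

# card.data is the parsed YAML front matter of README.md
assert card.data.license == "mit"
assert set(card.data.task_categories) >= {"image-to-3d", "text-to-3d"}
assert set(card.data.tags) >= {"3D-Generation", "3D-Edit"}
print(card.data.to_yaml())
```

If the card still carried `license: apache-2.0`, the first assertion would fail, which is the discrepancy this PR corrects.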
README.md (CHANGED)

````diff
@@ -1,18 +1,20 @@
 ---
-license: apache-2.0
 language:
 - en
-tags:
-- 3D-Generation
-- 3D-Edit
+license: mit
 task_categories:
 - image-to-3d
 - text-to-3d
+tags:
+- 3D-Generation
+- 3D-Edit
 ---
 
 # Edit3D-Bench
 
-
+[Paper](https://huggingface.co/papers/2508.19247) | [Project Page](https://huanngzh.github.io/VoxHammer-Page/) | [Code](https://github.com/Nelipot-Lee/VoxHammer)
+
+**Edit3D-Bench** is a benchmark for 3D editing evaluation, introduced in the paper [VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space](https://huggingface.co/papers/2508.19247).
 This dataset comprises 100 high-quality 3D models, with 50 selected from Google Scanned Objects (GSO) and 50 from PartObjaverse-Tiny.
 For each model, we provide 3 distinct editing prompts. Each prompt is accompanied by a complete set of annotated 3D assets, including
 * original 3D asset with rendered images
@@ -87,11 +89,12 @@ Check details in [our github repo](https://github.com/Nelipot-Lee/VoxHammer/Edit
 
 ## 🧷 Citation
 
-```
+```bibtex
 @article{li2025voxhammer,
 title = {VoxHammer: Training-Free Precise and Coherent 3D Editing in Native 3D Space},
 author = {Li, Lin and Huang, Zehuan and Feng, Haoran and Zhuang, Gengxiong and Chen, Rui and Guo, Chunchao and Sheng, Lu},
 journal = {arXiv preprint arXiv:2508.19247},
-year = {2025}
+year = {2025},
+url = {https://huggingface.co/papers/2508.19247}
 }
 ```
````
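Since the card defers the exact asset layout to the GitHub repository, the most straightforward way to pull the full benchmark (3D models, rendered images, and edit annotations) locally is a whole-repo snapshot. A minimal sketch, again with a placeholder repo id:

```python
# Sketch only: download the complete Edit3D-Bench dataset repository from the Hub.
# "<namespace>/Edit3D-Bench" is a placeholder repo id; see the GitHub repo for the
# directory layout of the annotated 3D assets.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<namespace>/Edit3D-Bench",
    repo_type="dataset",
)
print("Benchmark assets downloaded to:", local_dir)
```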