Update README.md
README.md (CHANGED)
@@ -22,7 +22,7 @@ This model extends the capabilities of [Open-Sora-Plan](https://github.com/PKU-Y
 
 ## Training Details
 
-- **Training Data**: Fine-tuned on a custom dataset of 0.16 million royalty-free video-text pairs. This dataset was independently collected and curated by DATAGRID Inc., focusing on diverse scenes, motions, and objects.
+- **Training Data**: Fine-tuned on a custom dataset of 0.16 million royalty-free video-text pairs. This dataset was independently collected and curated by DATAGRID Inc., focusing on diverse scenes, motions, and objects. For V2V inpainting training data preparation, we built an automated mask generation pipeline utilizing state-of-the-art models like Meta AI's SAM2 (Segment Anything Model 2) and Microsoft's Florence2 to automatically generate masks for target objects in videos. This significantly improved efficiency and reduced costs compared to traditional manual annotation methods.
 
 ## Inference Details
 
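The added line describes an automated mask-generation pipeline built on Florence-2 and SAM 2 for V2V inpainting data. The sketch below shows one plausible way to wire such a pipeline together: Florence-2 grounds a text phrase to a bounding box on the first frame, and SAM 2's video predictor propagates a mask for that box through the clip. The model IDs, config/checkpoint paths, task prompt, target phrase, and frame directory are illustrative assumptions, not DATAGRID's actual implementation.

```python
import os

import numpy as np
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
from sam2.build_sam import build_sam2_video_predictor

# Hypothetical inputs for illustration only.
FRAMES_DIR = "clip_0001/frames"   # directory of extracted JPEG frames (assumption)
TARGET_PHRASE = "a running dog"   # object to mask, taken from the clip's caption (assumption)

# 1) Florence-2 grounds the target phrase to a bounding box on the first frame.
florence = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-large", trust_remote_code=True, torch_dtype=torch.float16
).to("cuda")
processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)

first_frame = Image.open(os.path.join(FRAMES_DIR, sorted(os.listdir(FRAMES_DIR))[0]))
task = "<CAPTION_TO_PHRASE_GROUNDING>"
inputs = processor(text=task + TARGET_PHRASE, images=first_frame, return_tensors="pt").to(
    "cuda", torch.float16
)
generated = florence.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=256,
    num_beams=3,
)
decoded = processor.batch_decode(generated, skip_special_tokens=False)[0]
parsed = processor.post_process_generation(decoded, task=task, image_size=first_frame.size)
box = np.array(parsed[task]["bboxes"][0], dtype=np.float32)  # [x0, y0, x1, y1]

# 2) SAM 2's video predictor turns the box into a mask on frame 0 and
#    propagates it through the remaining frames of the clip.
#    Config and checkpoint paths follow the sam2 repo layout (assumption).
predictor = build_sam2_video_predictor(
    "configs/sam2.1/sam2.1_hiera_l.yaml", "checkpoints/sam2.1_hiera_large.pt"
)
state = predictor.init_state(video_path=FRAMES_DIR)
predictor.add_new_points_or_box(inference_state=state, frame_idx=0, obj_id=1, box=box)

masks = {}
with torch.inference_mode():
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        # One binary mask per frame, paired with the frame as a V2V inpainting target.
        masks[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()
```

In a batch setting, the same two steps would run per clip, with the per-frame masks written alongside the frames to form the video, mask, and caption triplets used for inpainting fine-tuning.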