---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# SketchVideo: sketch-based video generation and editing

![row01](teaser.jpg)

SketchVideo achieves sketch-based spatial and motion control for video generation, and supports fine-grained editing of real or synthetic videos.

## Model Details

### Model Description

SketchVideo generates video clips (~6 seconds, 8 fps) from text prompts and one or two keyframe sketches (placed at arbitrary time points). With the same inputs, it also supports sketch-based editing of existing video clips (~6 seconds, 8 fps). The model was trained to generate 49 video frames at a resolution of 720x480, given a context frame of the same resolution.

- **Model type:** Video Diffusion Model
- **Finetuned from model:** CogVideo-2b (720x480)

### Model Sources

For research purposes, we recommend our GitHub repository (https://github.com/IGLICT/SketchVideo),
which includes detailed implementations.

- **Repository:** https://github.com/IGLICT/SketchVideo
- **Paper:** https://arxiv.org/abs/2503.23284
- **Project page:** http://geometrylearning.com/SketchVideo/
- **Video:** https://www.youtube.com/watch?v=eo5DNiaGgiQ

## Uses

Feel free to use SketchVideo under the Apache-2.0 license. Note that we currently do not offer any official commercial product based on SketchVideo.

## Limitations

- The generated videos are relatively short (~6 seconds at 8 fps).
- When two keyframe sketches are provided, they should depict consistent content.

## How to Get Started with the Model

Check out https://github.com/IGLICT/SketchVideo
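
As a quick sanity check on the clip specification quoted in this card (49 frames, 8 fps, 720x480), here is a minimal, self-contained sketch; the constants are taken from the card itself and no SketchVideo code is required:

```python
# Clip specification stated in this model card.
NUM_FRAMES = 49           # frames generated per clip
FPS = 8                   # playback frame rate
WIDTH, HEIGHT = 720, 480  # training / generation resolution

# 49 frames at 8 fps is just over 6 seconds, matching the
# "~6 seconds" figure used throughout the card.
duration_s = NUM_FRAMES / FPS
print(f"{duration_s:.3f} s per clip")                 # 6.125 s per clip
print(f"{WIDTH}x{HEIGHT} = {WIDTH * HEIGHT} pixels")  # 720x480 = 345600 pixels
```

For actual sketch-conditioned generation and editing, follow the installation and inference instructions in the repository above.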