AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning
Abstract
The AnyCap Project introduces a framework, dataset, and evaluation protocol to enhance controllability and reliability in multimodal captioning.
Controllable captioning is essential for precise multimodal alignment and instruction following, yet existing models often lack fine-grained control and reliable evaluation protocols. To address this gap, we present the AnyCap Project, an integrated solution spanning model, dataset, and evaluation. We introduce AnyCapModel (ACM), a lightweight plug-and-play framework that enhances the controllability of existing foundation models for omni-modal captioning without retraining the base model. ACM reuses the original captions from base models while incorporating user instructions and modality features to generate improved captions. To remedy the data scarcity in controllable multimodal captioning, we build AnyCapDataset (ACD), covering three modalities, 28 user-instruction types, and 300k high-quality data entries. We further propose AnyCapEval, a new benchmark that provides more reliable evaluation metrics for controllable captioning by decoupling content accuracy and stylistic fidelity. ACM markedly improves caption quality across a diverse set of base models on AnyCapEval. Notably, ACM-8B raises GPT-4o's content scores by 45% and style scores by 12%, and it also achieves substantial gains on widely used benchmarks such as MIA-Bench and VidCapBench.
Community
🎯 AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning
AnyCap Project is a unified captioning framework, dataset, and benchmark that supports image, audio, and video captioning with controllable styles. It's fully open-sourced, covering training, evaluation, and benchmarking!
✨ Highlights
🌐 Unified Multi-modal Captioning
A single framework for:
- Image Captioning
- Audio Captioning
- Video Captioning
All under one roof, with support for modality-specific components.
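To make the unified interface concrete, here is a minimal, self-contained sketch of what a single omni-modal call could look like. The stub class below is purely illustrative, not the released API; the real entry points live in the GitHub repo.

```python
# Illustrative stand-in for a unified omni-modal interface (not the released API).
class AnyCapStub:
    def caption(self, media: str, modality: str) -> str:
        # A real ACM pairs a modality-specific encoder with a shared captioner.
        return f"<{modality} caption of {media}>"

acm = AnyCapStub()
print(acm.caption("beach.jpg", modality="image"))   # image captioning
print(acm.caption("street.wav", modality="audio"))  # audio captioning
print(acm.caption("match.mp4", modality="video"))   # video captioning
```

The point of the sketch is the call shape: only the input file and the modality flag change across image, audio, and video.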
📝 Customizable Captioning
Control the content and style of captions with a single text prompt:
- Content: Background, Event, Instance, Action, Instance Appearance, Region, and more
- Style: Brief, Detail, Genre, Length, Theme
The result is captions tailored to each user's needs; a minimal sketch follows below.
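Per the abstract, ACM is plug-and-play: it takes a frozen base model's caption together with a user instruction (plus the modality features) and produces a refined caption, without retraining the base model. The sketch below illustrates that flow; both functions are hypothetical stand-ins, not the project's actual interfaces.

```python
# Hypothetical stand-ins illustrating ACM's plug-and-play refinement flow.
def base_model_caption(media: str) -> str:
    """Stand-in for any frozen base captioner (e.g. a GPT-4o call)."""
    return "Two players hit a ball across a net on an outdoor court."

def acm_refine(media: str, base_caption: str, instruction: str) -> str:
    """Stand-in for ACM: conditions on modality features, the base caption,
    and the user instruction to produce an improved caption."""
    return f"[refined per '{instruction}'] {base_caption}"

base = base_model_caption("match.mp4")  # base model output is reused as-is
print(acm_refine("match.mp4", base,
                 "Describe only the background of the scene."))  # content control
print(acm_refine("match.mp4", base,
                 "Give a brief, one-sentence caption."))         # style control
```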
📊 Open Benchmark & Evaluation: AnyCapEval
An industry-grade benchmark with:
- Modality-specific test sets (image/audio/video)
- Content-related metrics
- Style-related metrics
Decoupling content and style scoring yields more accurate, lower-variance assessment; a toy sketch of the two-axis report follows below.
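AnyCapEval's key design choice is reporting content accuracy and stylistic fidelity on separate axes rather than blending them into one number. Here is a toy sketch of that two-axis report shape; the scoring functions are placeholders, and the real metrics are defined in the paper.

```python
# Toy two-axis report; the scorers below are placeholders, not AnyCapEval's metrics.
from dataclasses import dataclass

@dataclass
class CapScore:
    content: float  # does the caption cover the instructed content?
    style: float    # does it follow the requested style (length, genre, ...)?

def evaluate(caption: str, reference: str, instruction: str) -> CapScore:
    content = float(bool(set(caption.split()) & set(reference.split())))
    style = float(len(caption.split()) <= 20) if "brief" in instruction else 1.0
    return CapScore(content=content, style=style)

print(evaluate("A sunny beach.", "People relax on a sunny beach.", "brief"))
# -> CapScore(content=1.0, style=1.0); two axes, never collapsed into one score
```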
🛠️ End-to-End Open Source
Everything you need is included:
- ✅ Full training data
- ✅ Model inference pipeline
- ✅ Evaluation benchmark
All available under a permissive open-source license.
🚀 Get Started
Check out the paper and code:
📄 Paper: arXiv:2507.12841
📦 Code & Models: GitHub
📬 Contact
For questions, collaborations, or benchmark submissions, please reach out via the paper's contact email.