ChartCap: Mitigating Hallucination of Dense Chart Captioning
Abstract
ChartCap, a large-scale dataset with dense, type-specific captions for real-world charts, improves caption accuracy and reduces hallucinations in vision language models.
Generating accurate, informative, and hallucination-free captions for charts remains challenging for vision language models, primarily due to the lack of large-scale, high-quality datasets of real-world charts: existing real-world chart datasets include extraneous information that cannot be inferred from the chart and fail to sufficiently capture structural elements and key insights. We therefore introduce ChartCap, a large-scale dataset of 565K real-world chart images paired with type-specific, dense captions that exclude extraneous information and highlight both structural elements and key insights in detail. To build ChartCap, we design a four-stage pipeline that generates captions using only the data discernible from the chart, and we employ cycle-consistency-based human verification, which accelerates quality control without sacrificing accuracy. We also propose a novel metric, the Visual Consistency Score, which evaluates caption quality by measuring the similarity between a chart regenerated from the caption and the original chart, independent of reference captions. Extensive experiments confirm that models fine-tuned on ChartCap consistently generate more accurate and informative captions with fewer hallucinations, surpassing both open-source and proprietary models and even human-annotated captions.
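The Visual Consistency Score is reference-free: a caption scores highly only if a chart regenerated from it looks like the original, so omissions and hallucinations both lower the score. Below is a minimal sketch of this idea, not the paper's implementation: it assumes a hypothetical caption-to-chart model (`generate_chart`) and uses cosine similarity between CLIP image embeddings as the similarity measure; the paper's actual regeneration model and similarity function may differ.

```python
# Sketch of a reference-free Visual Consistency Score.
# Assumptions (not from the paper): charts are regenerated by a
# hypothetical caption-to-chart model, and similarity is cosine
# similarity between CLIP image embeddings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def generate_chart(caption: str) -> Image.Image:
    # Hypothetical: render a chart from the caption text, e.g. via a
    # text-to-chart model or LLM-generated plotting code.
    raise NotImplementedError

def embed(image: Image.Image) -> torch.Tensor:
    # L2-normalized CLIP image embedding.
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def visual_consistency_score(original: Image.Image, caption: str) -> float:
    # Regenerate a chart from the caption and compare it to the original;
    # higher cosine similarity means the caption preserved more of the chart.
    regenerated = generate_chart(caption)
    return float(embed(original) @ embed(regenerated).T)
```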
Community
- ChartCap Dataset: https://huggingface.co/datasets/junyoung-00/ChartCap
- Phi-3.5-vision-instruct-ChartCap (4B): https://huggingface.co/junyoung-00/Phi-3.5-vision-instruct-ChartCap
- Webpage: https://junyoung-00.github.io/ChartCap/
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this one:
- ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing (2025)
- EXPERT: An Explainable Image Captioning Evaluation Metric with Structured Explanations (2025)
- Unblocking Fine-Grained Evaluation of Detailed Captions: An Explaining AutoRater and Critic-and-Revise Pipeline (2025)
- See Different, Think Better: Visual Variations Mitigating Hallucinations in LVLMs (2025)
- CultureCLIP: Empowering CLIP with Cultural Awareness through Synthetic Images and Contextualized Captions (2025)
- OVFact: Measuring and Improving Open-Vocabulary Factuality for Long Caption Models (2025)
- DenseWorld-1M: Towards Detailed Dense Grounded Caption in the Real World (2025)