---
dataset_info:
  features:
    - name: clip_name
      dtype: string
    - name: human_caption
      dtype: string
  splits:
    - name: train
      num_bytes: 1544750
      num_examples: 500
  download_size: 806248
  dataset_size: 1544750
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
pretty_name: argus
license: cc-by-nc-sa-4.0
task_categories:
  - video-text-to-text
language:
  - en
---

# ARGUS: Hallucination and Omission Evaluation in Video-LLMs

ARGUS is a framework for measuring the degree of hallucination and omission in free-form video captions. It reports two costs:

- **ArgusCost-H** (Hallucination Cost): the degree of hallucinated content in the video caption
- **ArgusCost-O** (Omission Cost): the degree of omitted content in the video caption

Lower values indicate better performance.
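The exact ArgusCost computation is defined in the paper. As a rough intuition only, both costs can be viewed as mismatch rates between what the caption says and what the video shows. Below is a minimal, hypothetical sketch; the function, the claim/event sets, and the scoring are illustrative assumptions, not the actual metric:

```python
# Illustrative sketch only; this is NOT the ArgusCost formulation from the paper.
# It treats hallucination as caption claims unsupported by the video, and
# omission as video events missing from the caption.
def sketch_costs(caption_claims: set[str], video_events: set[str]) -> tuple[float, float]:
    hallucination_cost = len(caption_claims - video_events) / max(len(caption_claims), 1)
    omission_cost = len(video_events - caption_claims) / max(len(video_events), 1)
    return hallucination_cost, omission_cost

# Example: one unsupported claim out of two, one missed event out of two.
h, o = sketch_costs({"a dog runs", "it is raining"}, {"a dog runs", "a ball bounces"})
print(h, o)  # 0.5 0.5
```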

If you have any comments or questions, reach out to Ruchit Rawal.

Other links: Website, Paper, Code

## Dataset Structure

Each row in the dataset consists of the name of the video clip, i.e. `clip_name` (dtype: `string`), and the corresponding `human_caption` (dtype: `string`). Download all the clips from here.
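Since the videos ship separately from the captions, you typically map each `clip_name` to a local file after downloading. The layout and file extension below are assumptions for illustration, not something specified by the dataset:

```python
from pathlib import Path

# Hypothetical local layout: clips downloaded into ./clips, one file per clip_name.
clips_dir = Path("clips")

def clip_path(clip_name: str) -> Path:
    return clips_dir / f"{clip_name}.mp4"  # the .mp4 extension is an assumption
```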

## Loading the dataset

You can load the dataset easily using the Datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("tomg-group-umd/argus")
```
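Each example then exposes the two fields described above, for instance:

```python
train = dataset["train"]           # single "train" split with 500 examples
print(train[0]["clip_name"])       # name of the video clip
print(train[0]["human_caption"])   # corresponding human-written caption
```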

## Cite us

TODO

## Acknowledgements

The clips are collected from three primary sources. First, we utilize existing video understanding datasets [1] that already contain captions; these videos were manually verified by human authors and are well received in the community. Second, we incorporate text-to-video generation datasets [2, 3], which include reference videos and short prompts. Since these prompts are insufficient for dense captioning, we manually annotate 10 such videos. Lastly, the authors curate 30 additional videos from publicly available sources, such as YouTube, under Creative Commons licenses, and manually annotate them with cross-validation among the authors.

[1] AuroraCap: Efficient, Performant Video Detailed Captioning and a New Benchmark

[2] TC-Bench: Benchmarking Temporal Compositionality in Text-to-Video and Image-to-Video Generation

[3] https://huggingface.co/datasets/finetrainers/cakeify-smol