Griffon v2 12M Dataset Card
News
**[2025/08/10]** We are happy to announce that Griffon v2 is accepted to ICCV 2025.
Dataset details
We provide the 12M samples used in the stage 2 training of Griffon v2. This repo contains the processed annotation files for the Object Detection, REC/REG, Visual Grounding, and Non-existing Judging tasks described in the paper, as well as our self-collected object counting data.
Self-Counting Data
The counting data includes three parts: CT-datasets-new.tar.gz, CountAnythingV1_clean.tar.gz, and train_visual_openimages_cocostyle_cls601.json. The files ending in .tar.gz contain both the annotation files and the images. For the data collected from OpenImages, please download the OpenImages 2019 train images yourself.
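Judging by its name, the OpenImages counting file follows the COCO annotation format. The sketch below shows how such a file might be indexed once loaded; the exact fields in the record are assumptions based on the standard COCO layout, not guarantees about this dataset's schema.

```python
import json
from collections import defaultdict

# A minimal COCO-style record like the one the cls601 file presumably
# follows (field names taken from the standard COCO format; the actual
# file may carry extra fields).
coco = {
    "images": [{"id": 1, "file_name": "0001.jpg", "height": 480, "width": 640}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 601,
         "bbox": [10.0, 20.0, 50.0, 80.0], "area": 4000.0},
    ],
    "categories": [{"id": 601, "name": "example-class"}],
}

# Group annotations by image id so each image's boxes can be fetched quickly.
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

boxes = [a["bbox"] for a in anns_by_image[1]]
print(boxes)  # [[10.0, 20.0, 50.0, 80.0]]
```

For a counting task, `len(anns_by_image[image_id])` then gives the object count per image for the chosen category.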
Other Data
For the other annotations, please download the images from the following datasets: COCO (train2014 & train2017), Visual Genome, Objects365-2023, V3Det, and Flickr30K Entities. If you meet any problem such as missing images, please contact us via GitHub issues.
License
Attribution-NonCommercial 4.0 International. Use of this data should also abide by the policies of the original data sources.
Citation
To use this data, please cite:
@article{zhan2024griffon,
title={Griffon v2: Advancing multimodal perception with high-resolution scaling and visual-language co-referring},
author={Zhan, Yufei and Zhu, Yousong and Zhao, Hongyin and Yang, Fan and Tang, Ming and Wang, Jinqiao},
journal={arXiv preprint arXiv:2403.09333},
year={2024}
}