---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: obj_text
    sequence: string
  - name: ref_ids
    sequence: int64
  - name: json_data
    list:
    - name: ref_id
      dtype: int64
    - name: text
      sequence: string
    - name: depth_caption
      dtype: string
    - name: pred_res
      sequence: int64
  splits:
  - name: train
    num_bytes: 12965187.0
    num_examples: 100
  download_size: 12935674
  dataset_size: 12965187.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Som_bench_refcocog_refseg Dataset

This dataset is a processed version of the RefCOCOg dataset, intended for use as part of a benchmark. It mirrors the data splits and format used in the [Set-of-Mark (SoM)](https://github.com/microsoft/SoM/tree/main/benchmark) benchmark and is designed for evaluating visual grounding and related tasks.

**Original Dataset:**

This dataset is based on the [RefCOCOg](https://github.com/lichengunc/refer) dataset. Please refer to the original RefCOCOg dataset for its terms of use and licensing.

**Benchmark Reference:**

This dataset follows the benchmark setup described in the following repository:

* [Set-of-Mark (SoM) Benchmark](https://github.com/microsoft/SoM/tree/main/benchmark)

**Citation (SoM):**

If you use this *benchmark setup* in your research, please cite the following paper:

```bibtex
@article{yang2023setofmark,
  title={Set-of-Mark Prompting Unleashes Extraordinary Visual Grounding in GPT-4V},
  author={Jianwei Yang and Hao Zhang and Feng Li and Xueyan Zou and Chunyuan Li and Jianfeng Gao},
  journal={arXiv preprint arXiv:2310.11441},
  year={2023}
}
```
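
**Usage Example:**

A minimal sketch of loading the `train` split and inspecting the features listed above with the Hugging Face `datasets` library. The repository ID used below is a placeholder; replace it with the actual Hub path hosting this dataset.

```python
from datasets import load_dataset

# Placeholder Hub ID; substitute the actual repository path for this dataset.
REPO_ID = "your-username/Som_bench_refcocog_refseg"

# Load the single "train" split (100 examples) declared in the default config.
dataset = load_dataset(REPO_ID, split="train")

example = dataset[0]
print(example["id"])            # sample identifier
print(example["obj_text"])      # referring-expression texts for the image
print(example["ref_ids"])       # reference ids paired with the texts
print(example["json_data"][0])  # per-reference annotation: ref_id, text, depth_caption, pred_res
print(example["image"].size)    # the image feature decodes to a PIL Image
```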