---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: question
    dtype: string
  - name: L
    dtype: int64
  - name: images
    sequence: image
  splits:
  - name: train
    num_bytes: 46727796
    num_examples: 120
  download_size: 45774717
  dataset_size: 46727796
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- image-text-to-text
---
This dataset, LongWriter-V-22K, is used to train the LongWriter-V model for ultra-long and high-fidelity generation in vision-language models. Each example pairs one or more input images with a question prompt and a target output length `L`, and is designed to prompt the model to generate extended text grounded in the input images.

The model and dataset are described in the paper *LongWriter-V: Enabling Ultra-Long and High-Fidelity Generation in Vision-Language Models*.
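The dataset can be loaded with the Hugging Face `datasets` library. The sketch below is a minimal example; the repository id `THU-KEG/LongWriter-V-22K` is an assumption and should be replaced with the actual Hub path of this dataset.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# NOTE: the repo id below is an assumption; substitute the real Hub path.
from datasets import load_dataset

dataset = load_dataset("THU-KEG/LongWriter-V-22K", split="train")

# Each example contains an id, a question prompt, a target length `L`,
# and a sequence of input images.
example = dataset[0]
print(example["id"], example["L"])
print(example["question"])
print(len(example["images"]), "image(s)")
```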