---
license: apache-2.0
---

# Vision-Language Pairs Dataset

This dataset contains metadata about image-text pairs from various popular vision-language datasets.

## Contents

- **vision_language_data/all_vision_language_images.csv**: Combined metadata for all images (75629 records)
- **vision_language_data/all_vision_language_captions.csv**: Combined captions for all images (86676 records)
- **dataset_statistics.csv**: Summary statistics for each dataset
- **category_distribution.csv**: Distribution of image categories across datasets
- **caption_length_distribution.csv**: Distribution of caption lengths
- **caption_style_distribution.csv**: Distribution of caption styles
- **category_caption_statistics.csv**: Caption statistics by category
- **vision_language_catalog.json**: Searchable catalog with sample image-caption pairs
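As a sketch of how the combined captions file can be consumed, the snippet below parses rows with the standard library and groups caption text by image. The column names follow the Captions Table description in this card; the two sample rows are invented stand-ins for the real file.

```python
import csv
import io

# Minimal sketch: parse caption records and group caption text by image_id.
# Column names follow the Captions Table description; the sample rows are
# invented for illustration (real data lives in
# vision_language_data/all_vision_language_captions.csv).
sample = io.StringIO(
    "caption_id,image_id,dataset,text,language,style,length,word_count\n"
    "c1,img1,COCO,A dog runs on the beach.,en,short,24,6\n"
    "c2,img1,COCO,A brown dog running along a sandy beach near the water.,en,descriptive,55,11\n"
)
captions_by_image = {}
for row in csv.DictReader(sample):
    captions_by_image.setdefault(row["image_id"], []).append(row["text"])

print(captions_by_image["img1"][0])  # A dog runs on the beach.
```

To read the actual files, replace the `StringIO` stand-in with `open(...)` on the CSV path.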

## Datasets Included

- **COCO** (Common Objects in Context): A large-scale object detection, segmentation, and captioning dataset with multiple captions per image. (123287 images)
- **Flickr30K** (Flickr 30,000 Images): Images collected from Flickr, each with 5 reference captions written by human annotators. (31783 images)
- **Visual Genome**: Connects structured image concepts to language with detailed region descriptions and question-answer pairs. (108077 images)
- **Conceptual Captions**: A large-scale dataset of image-caption pairs harvested from the web and automatically filtered. (3300000 images)
- **CC3M** (Conceptual 3 Million): 3 million image-text pairs collected from the web, useful for vision-language pretraining. (3000000 images)
- **SBU Captions** (SBU Captioned Photo Dataset): 1 million images with associated captions collected from Flickr. (1000000 images)

## Fields Description

### Images Table

- **image_id**: Unique identifier for the image
- **dataset**: Source dataset name
- **image_url**: URL to the image (simulated)
- **primary_category**: Main content category
- **width**: Image width in pixels
- **height**: Image height in pixels
- **aspect_ratio**: Width divided by height
- **caption_count**: Number of captions for this image
- **license**: License under which the image is available

### Captions Table

- **caption_id**: Unique identifier for the caption
- **image_id**: ID of the associated image
- **dataset**: Source dataset name
- **text**: Caption text
- **language**: Caption language (default: `en`)
- **style**: Caption style (`descriptive`, `short`, or `detailed`)
- **length**: Number of characters in the caption
- **word_count**: Number of words in the caption
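To make the derived fields concrete, here is a small sketch of how `length`, `word_count`, and `aspect_ratio` relate to the raw values; the caption text and image dimensions are made-up examples, and word counting by whitespace split is an assumption about how `word_count` was computed.

```python
# Made-up example values; the formulas mirror the field descriptions above.
caption_text = "Two children playing soccer in a park."

length = len(caption_text)              # "length": characters in the caption
word_count = len(caption_text.split())  # "word_count": whitespace-separated words (assumed)

width, height = 640, 480
aspect_ratio = width / height           # "aspect_ratio": width divided by height

print(length, word_count, round(aspect_ratio, 2))  # 38 7 1.33
```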

## Usage Examples

This metadata can be used for:

1. Analyzing the composition of vision-language datasets
2. Comparing caption characteristics across different datasets
3. Training and evaluating image captioning models
4. Studying linguistic patterns in image descriptions
5. Developing multimodal AI systems
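For instance, use case 2 (comparing caption characteristics across datasets) might start with a per-dataset aggregate like the sketch below. The records are invented; a real analysis would read them from `all_vision_language_captions.csv` instead.

```python
from collections import defaultdict
from statistics import mean

# Invented records standing in for rows of all_vision_language_captions.csv.
captions = [
    {"dataset": "COCO", "word_count": 10},
    {"dataset": "COCO", "word_count": 12},
    {"dataset": "SBU Captions", "word_count": 7},
    {"dataset": "SBU Captions", "word_count": 9},
]

# Average caption word count per source dataset.
words_by_dataset = defaultdict(list)
for c in captions:
    words_by_dataset[c["dataset"]].append(c["word_count"])
avg_words = {name: mean(counts) for name, counts in words_by_dataset.items()}

print(avg_words)
```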

## Data Generation Note

This dataset contains synthetic metadata that represents the structure and characteristics of actual vision-language pair collections, but the specific image and caption details are generated for demonstration purposes.

Created: 2025-04-26

## Note

All files are packaged into a ZIP archive, `vision_language_pairs_data.zip`, for easy download. The expected archive size is in the 150-200 MB range, making the dataset convenient for research and educational use.
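A sketch of unpacking the archive with the standard library. Since the real ~150-200 MB download is not assumed to be present, the snippet builds a tiny in-memory stand-in first; with the actual file you would open `vision_language_pairs_data.zip` directly.

```python
import io
import zipfile

# Tiny in-memory stand-in for vision_language_pairs_data.zip; its single
# member and contents are invented for illustration.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as zf:
    zf.writestr("dataset_statistics.csv", "dataset,image_count\nCOCO,123287\n")

# With the real archive: zipfile.ZipFile("vision_language_pairs_data.zip")
with zipfile.ZipFile(buffer) as zf:
    names = zf.namelist()
    stats_csv = zf.read("dataset_statistics.csv").decode()

print(names)  # ['dataset_statistics.csv']
```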