Update README.md
README.md CHANGED
@@ -18,8 +18,6 @@ license: apache-2.0
The **BLIP3-OCR-200M** dataset is designed to address the limitations of current Vision-Language Models (VLMs) in processing and interpreting text-rich images, such as documents and charts. Traditional image-text datasets often struggle to capture nuanced textual information, which is crucial for tasks requiring complex text comprehension and reasoning.
-<img src="blip3_ocr_200m_examples/blip3_ocr_200m.png" alt="Art" width=600>
-<!-- <img src="blip3_ocr_200m.png" alt="Art" width=500> -->
### Key Features
- **OCR Integration**: The dataset incorporates Optical Character Recognition (OCR) data during the pre-training phase of VLMs. This integration enhances vision-language alignment by providing detailed textual information alongside visual data.
@@ -68,11 +66,6 @@ Each Parquet file contains a tabular structure with the following columns:
- **uid**: A unique identifier for the OCR data associated with each image.
- **ocr_num_token_larger_than_confidence_threshold**: The number of OCR tokens that exceed a specified confidence threshold (0.9 in this case).
-### Downloading the Original DataComp Images
-
-To download the original image for each sample from its `url` entry, you can use the [`img2dataset`](https://github.com/rom1504/img2dataset) tool, which efficiently downloads images from URLs and stores them in a specified format. The DataComp project provides a reference download script [here](https://github.com/mlfoundations/datacomp/blob/main/download_upstream.py).
-
-
### Example of Loading and Processing the Data
You can simply access the data by:
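For instance, a minimal sketch with the Hugging Face `datasets` library (the repository id below is an assumption; substitute the dataset's actual path):

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the dataset's actual Hugging Face path.
ds = load_dataset("Salesforce/blip3-ocr-200m", split="train", streaming=True)

# Each record carries the columns described above.
sample = next(iter(ds))
print(sample["uid"])
print(sample["ocr_num_token_larger_than_confidence_threshold"])
```

Streaming avoids materializing every Parquet shard locally; drop `streaming=True` to download and cache the full split instead.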
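To fetch the original DataComp images from the `url` column, a minimal [`img2dataset`](https://github.com/rom1504/img2dataset) sketch, assuming locally downloaded Parquet shards (all paths and tuning values below are illustrative):

```python
from img2dataset import download

# Minimal sketch: download the original DataComp images referenced by the
# `url` column of the Parquet shards. All paths here are hypothetical.
download(
    url_list="blip3_ocr_200m_parquet/",  # folder of downloaded Parquet shards
    input_format="parquet",
    url_col="url",
    output_folder="datacomp_images",
    output_format="files",
    image_size=512,
    processes_count=8,
    thread_count=32,
)
```

The DataComp `download_upstream.py` script linked above drives the same tool with defaults tuned for the full dataset.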