
AnimeText: A Large-scale Dataset for Robust Complex Anime Scene Text Detection


Dataset Summary
AnimeText is a large-scale dataset for anime scene text detection, containing 735K images and 4.2M annotated text blocks. Unlike existing natural-scene or document-centric text detection datasets, AnimeText focuses on text in anime scenes, which typically features diverse styles and irregular arrangements and is easily confused with complex visual elements such as symbols and decorative patterns. The dataset includes multilingual text (English, Chinese, Japanese, Korean, Russian) and provides hierarchical annotations and hard negative samples tailored for anime scenarios.
We trained YOLO12 models on this dataset and share them here. We also provide an online demo so you can try them quickly.
Features
- Large-scale Diversity: Contains 735K high-resolution anime images and 4.2M annotated text blocks
- Multilingual Support: Covers English, Chinese, Japanese, Korean, Russian, and other languages
- Hierarchical Annotations: Provides multi-granularity text area annotations for more precise text detection
- Hard Negative Samples: Includes annotations of hard negative samples that are easily confused with text, improving model robustness
- Polygon Annotations: Uses segmentation models to provide accurate polygon annotations for irregular text


Dataset Structure
Data Instances
A data instance includes an image and its corresponding text annotations:
```python
{
    'image': Image (image data),
    'image_id': int32 (image id),
    'objects': {
        # bbox values are normalized
        'bbox': [float32 (center x), float32 (center y), float32 (width), float32 (height)],
        'category': string (class id: {0: text, 1: hard negative sample}),
    },
}
```
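
Since `bbox` stores normalized center-format coordinates, recovering pixel coordinates only requires the image size. Below is a minimal sketch; the helper name `cxcywh_to_xyxy` is ours and not part of the dataset:

```python
def cxcywh_to_xyxy(bbox, img_w, img_h):
    """Convert a normalized (center x, center y, width, height) box to pixel (x1, y1, x2, y2)."""
    cx, cy, w, h = bbox
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return x1, y1, x2, y2
```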
Data Splits
The dataset is divided into training, validation, and test sets:
| Split | Number of Images | Number of Text Instances |
|---|---|---|
| Train | 514,144 | 3,228,144 |
| Validation | 147,191 | 922,758 |
| Test | 73,725 | 88,320 |
| Total | 735,060 | 4,239,222 |
Supported Tasks
- Text Detection: The dataset can be used to train and evaluate text detection models in anime scenes.
- Hard Negative Sample Discrimination: The hard negative samples in the dataset can be used to train models to distinguish between real text and text-like patterns.
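
For example, the `objects` field of an instance can be split into real text boxes and hard negative boxes by category. A minimal sketch, assuming `objects['bbox']` and `objects['category']` are parallel per-image lists and that the category values parse as the integers 0 and 1 from the schema above:

```python
def split_by_category(objects):
    """Separate text boxes (category 0) from hard negative boxes (category 1)."""
    text_boxes, hard_negatives = [], []
    for bbox, category in zip(objects['bbox'], objects['category']):
        (text_boxes if int(category) == 0 else hard_negatives).append(bbox)
    return text_boxes, hard_negatives
```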
Usage
With `datasets`

Install `datasets` with vision support:

```shell
pip install datasets[vision]
```

Load AnimeText with `datasets`:
```python
from datasets import load_dataset

# split choice in ['train', 'valid', 'test']
dataset = load_dataset("deepghs/AnimeText", split="train")

# data example
example = dataset[0]
image = example['image']  # PIL.Image
bbox = example['objects']['bbox']
# in category, 0 means positive samples and 1 means hard negative samples
category = example['objects']['category']

# iterate over the dataset
for data in dataset:
    ...
```
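
To sanity-check the annotations, you can draw the boxes of a loaded example with Pillow. A minimal sketch, assuming `objects['bbox']` and `objects['category']` are parallel per-image lists of normalized center-format boxes as described in the Data Instances section:

```python
from PIL import ImageDraw

def draw_annotations(example):
    """Draw annotated boxes: green for text (category 0), red for hard negatives (category 1)."""
    image = example['image'].copy()
    draw = ImageDraw.Draw(image)
    for bbox, category in zip(example['objects']['bbox'], example['objects']['category']):
        # convert normalized (cx, cy, w, h) to pixel corner coordinates
        cx, cy, w, h = bbox
        x1, y1 = (cx - w / 2) * image.width, (cy - h / 2) * image.height
        x2, y2 = (cx + w / 2) * image.width, (cy + h / 2) * image.height
        color = 'green' if int(category) == 0 else 'red'
        draw.rectangle([x1, y1, x2, y2], outline=color, width=2)
    return image

draw_annotations(dataset[0]).save('annotated_example.png')
```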
With `ultralytics`

This dataset can also be used with the `ultralytics` framework, with the following steps:

- Download the zip files inside the `./ultralytics` subdirectory of this repository.
- Extract them into the same directory with the `unzip -o` command (you can simply ignore the overwriting reminder, because all these 5 splits share the same `data.yaml` file).
- Train on this dataset with the `ultralytics` training CLI, as in the sketch below.
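
A minimal training sketch using the `ultralytics` Python API (equivalent to the training CLI). The weights file `yolo12n.pt` and the `data.yaml` path are placeholders; point them at the checkpoint you want to start from and the directory where you extracted the zip files:

```python
from ultralytics import YOLO

# start from a pretrained detection checkpoint (placeholder weights name)
model = YOLO('yolo12n.pt')

# train on the extracted AnimeText splits described by data.yaml (placeholder path)
model.train(data='path/to/data.yaml', epochs=100, imgsz=640)
```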
Benchmarks
Cross-dataset benchmarking was conducted using state-of-the-art text detection methods on AnimeText. Experimental results demonstrate that models trained on AnimeText outperform those trained on existing datasets in anime scenes.
| Method | Train Dataset | Test Dataset | Precision | Recall | F1-score | mAP50:95 |
|---|---|---|---|---|---|---|
| YOLO v11 | ICDAR15 | ICDAR15 | 0.657 | 0.495 | 0.565 | 0.291 |
| YOLO v11 | AnimeText | ICDAR15 | 0.187 | 0.31 | 0.233 | 0.083 |
| YOLO v11 | ICDAR15 | AnimeText | 0.0629 | 0.106 | 0.079 | 0.01 |
| YOLO v11 | AnimeText | AnimeText | 0.878 | 0.825 | 0.851 | 0.806 |
| Bridging Text Spotting | ICDAR15 | AnimeText | 0.035 | 0.189 | 0.056 | - |
Acknowledgements
We thank all annotators and volunteers who participated in building the AnimeText dataset. This research was supported by DeepGHS.
