Introduction
Modern image captioning relies heavily on extracting knowledge from images, such as objects, to capture the concept of the static story in the image. In this paper, we propose a textual visual context dataset for captioning, in which the publicly available COCO Captions dataset (Lin et al., 2014) has been extended with information about the scene (such as objects in the image). Since this information has textual form, it can be used to leverage any NLP task, such as text similarity or semantic relation methods, in captioning systems, either as an end-to-end training strategy or as a post-processing approach.
Overview
We enrich COCO Captions with textual visual context information. We use ResNet152, CLIP, and Faster R-CNN to extract object information for each COCO Captions image. We apply three filtering approaches to ensure the quality of the dataset: (1) thresholding, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment, which uses semantic similarity to remove duplicated objects; and (3) a semantic relatedness score as a soft label. For (3), to guarantee that the visual context and the caption are strongly related, we use Sentence-RoBERTa (SBERT), which uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity; the final label is 1 if the cosine similarity exceeds a threshold (0.2, 0.3, or 0.4), and 0 otherwise. Finally, to take advantage of the overlap between the visual context and the caption, and to extract global information from each visual, we use BERT followed by a shallow CNN (Kim, 2014).
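The soft-labeling step (3) can be sketched as follows. This is a minimal illustration, not the release code: it assumes SBERT-style sentence embeddings have already been computed (the real pipeline would obtain them with a Sentence-RoBERTa model), and only shows the cosine-similarity thresholding that produces the 0/1 label.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def soft_label(visual_emb: np.ndarray, caption_emb: np.ndarray,
               threshold: float = 0.2):
    """Label a (visual context, caption) pair: 1 if the cosine similarity
    of their embeddings exceeds the threshold, else 0. Returns (label, score)."""
    score = cosine_similarity(visual_emb, caption_emb)
    return (1 if score > threshold else 0), score
```

With the sentence-transformers library, the two embeddings would come from something like `model.encode([visual_context, caption])`; the dataset provides labels for thresholds 0.2, 0.3, and 0.4.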
For a quick start, please have a look at this Colab.
Dataset
Sample
| VC1          | VC2          | VC3     | human annotated caption                          |
| ------------ | ------------ | ------- | ------------------------------------------------ |
| cheeseburger | plate        | hotdog  | a plate with a hamburger fries and tomatoes      |
| bakery       | dining table | website | a table having tea and a cake on it              |
| gown         | groom        | apron   | its time to cut the cake at this couples wedding |
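The three visual-context columns above are what remains after the duplicate-removal filter (2). A minimal sketch of that step, assuming each candidate object label already has an embedding (random-looking vectors stand in here; names like `dedup_visual_context` are illustrative, not from the release):

```python
import numpy as np

def _cos(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dedup_visual_context(labels, embeddings, sim_threshold=0.9):
    """Keep an object label only if it is not a near-duplicate (by cosine
    similarity of embeddings) of a label that was already kept."""
    kept_labels, kept_embs = [], []
    for label, emb in zip(labels, embeddings):
        if all(_cos(emb, k) < sim_threshold for k in kept_embs):
            kept_labels.append(label)
            kept_embs.append(emb)
    return kept_labels
```

The similarity threshold 0.9 is an assumption for illustration; the paper only states that semantically duplicated objects are removed.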
Download
- Download raw data with ID and visual context -> original dataset with related caption IDs (train2014)
- Download data with cosine score -> soft cosine label with thresholds 0.2, 0.3, 0.4, and 0.5
- Download overlapping visual context with caption -> overlap between the visual context and the human-annotated caption
- Download dataset (tsv file) 0.0 -> raw data with hard labels, without cosine similarity, and with cosine-similarity thresholds (degree of relation between the visual context and the caption) of 0.2, 0.3, and 0.4
- Download dataset GenderBias -> man/woman replaced with the person class label
For unsupervised learning
- Download CC -> caption dataset from Conceptual Captions (CC), 2M (2255927 captions)
- Download CC+wiki -> CC + 1M wiki, 3M (3255928)
- Download CC+wiki+COCO -> CC + wiki + COCO Captions, 3.5M (366984)
- Download COCO-caption+wiki -> COCO Captions + wiki, 1.4M (1413915)
- Download COCO-caption+wiki+CC+8Mwiki -> COCO Captions + wiki + CC + 8M wiki, 11M (11541667)