# Overview
<img align="right" width="300" height="280" src="LRCE_figure_1.png">
We enrich COCO-Caption with **textual visual context** information. We use [ResNet152](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf), [CLIP](https://github.com/openai/CLIP) and [Faster R-CNN](https://github.com/tensorflow/models/tree/master/research/object_detection) to extract object information for each COCO-Caption image. We apply three filtering approaches to ensure the quality of the dataset: (1) a confidence threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment via semantic similarity, to remove duplicated objects; and (3) a semantic relatedness score as a soft label: to guarantee that the visual context and the caption are strongly related, we use [Sentence RoBERTa](https://www.sbert.net) (SBERT uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity) to assign a soft label via cosine similarity with a **th**reshold (if th > 0.2, 0.3, or 0.4, the label is 1, otherwise 0). Finally, to take advantage of the overlap between the visual context and the caption, and to extract global information from each visual, we use BERT followed by a shallow CNN [(Kim, 2014)](https://arxiv.org/pdf/1408.5882.pdf). [colab](https://colab.research.google.com/drive/1N0JVa6y8FKGLLSpiG7hd_W75UYhHRe2j?usp=sharing)
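A minimal sketch of the soft-labeling step described above, using the sentence-transformers library to score the relatedness between a visual context and a caption. The checkpoint name and the example pair are illustrative assumptions, not the exact pipeline:

```python
# Illustrative sketch (assumptions noted): score visual-context/caption
# relatedness with Sentence-BERT and threshold the cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Assumed checkpoint; the README only states that a RoBERTa-based SBERT is used.
model = SentenceTransformer("stsb-roberta-large")

visual_context = "cheeseburger"                          # predicted object label
caption = "a plate with a hamburger fries and tomatoes"  # human annotated caption

emb = model.encode([visual_context, caption], convert_to_tensor=True)
score = util.cos_sim(emb[0], emb[1]).item()

th = 0.2                        # threshold from the README: 0.2, 0.3 or 0.4
label = 1 if score > th else 0
print(f"cosine similarity = {score:.3f}, label = {label}")
```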
## Dataset
### Sample
| VC1          | VC2          | VC3     | Human annotated caption                           |
| ------------ | ------------ | ------- | ------------------------------------------------- |
| cheeseburger | plate        | hotdog  | a plate with a hamburger fries and tomatoes       |
| bakery       | dining table | website | a table having tea and a cake on it               |
| gown         | groom        | apron   | its time to cut the cake at this couples wedding  |
### Download
0. [Download raw data with ID and visual context](https://www.dropbox.com/s/xuov24on8477zg8/All_Caption_ID.csv?dl=0) -> the original dataset with the related caption IDs from [train2014](https://cocodataset.org/#download)
1. [Download data with cosine score](https://www.dropbox.com/s/u1n2r2ign8v7gvh/visual_caption_cosine_score.zip?dl=0) -> soft cosine label with **th** 0.2, 0.3, 0.4 and 0.5
2. [Download overlapping visual with caption](https://www.dropbox.com/s/br8nhnlf4k2czo8/COCO_overlaping_dataset.txt?dl=0) -> overlap between the visual context and the human annotated caption
3. [Download dataset (tsv file)](https://www.dropbox.com/s/dh38xibtjpohbeg/train_all.zip?dl=0) -> raw data with hard labels (no cosine similarity score), using a **th**reshold on the cosine similarity between the visual context and the caption of 0.2, 0.3 or 0.4 (see the loading sketch after this list)
4. [Download dataset GenderBias](https://www.dropbox.com/s/dh38xibtjpohbeg/train_all.zip?dl=0) -> man/woman replaced with the person class label
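A minimal sketch of how the TSV data from item 3 could be loaded and filtered; the extracted filename and the column layout are assumptions about the zip contents:

```python
# Illustrative sketch: load the hard-label TSV and keep related pairs.
# The filename "train_all.tsv" and the column names are assumptions.
import pandas as pd

cols = ["caption", "visual_context", "label"]  # assumed column layout
df = pd.read_csv("train_all.tsv", sep="\t", names=cols)

related = df[df["label"] == 1]  # rows where caption and visual context are related
print(len(related), "related pairs out of", len(df))
```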
For unsupervised learning:
1. [Download CC](https://www.dropbox.com/s/pc1uv2rf6nqdp57/CC_caption_40.txt.zip) -> caption dataset from Conceptual Captions (CC) 2M (2255927 captions)
2. [Download CC+wiki](https://www.dropbox.com/s/xuov24on8477zg8/All_Caption_ID.csv?dl=0) -> CC+1M-wiki 3M (3255928)
3. [Download CC+wiki+COCO](https://www.dropbox.com/s/k7oqwr9a1a0h8x1/CC_caption_40%2Bwiki%2BCOCO.txt.zip) -> CC+wiki+COCO-Caption 3.5M (366984)
4. [Download COCO-caption+wiki](https://www.dropbox.com/s/wc4k677wp24kzhh/COCO%2Bwiki.txt.zip) -> COCO-caption+wiki 1.4M (1413915)
5. [Download COCO-caption+wiki+CC+8Mwiki](https://www.dropbox.com/s/xhfx32sjy2z5bpa/11M_wiki_7M%2BCC%2BCOCO.txt.zip) -> COCO-caption+wiki+CC+8Mwiki 11M (11541667)