AhmedSSabir committed
Commit 3b78033
1 Parent(s): 52497dc

Update README.md

Files changed (1)
  1. README.md +17 -6
README.md CHANGED
@@ -1,20 +1,31 @@
 
  # Introduction
 
- Modern image captioning relies heavily on extracting knowledge from images, such as objects, to capture the concept of a static story in the image. In this paper, we propose a textual visual context dataset for captioning, where the publicly available COCO Captions dataset (Lin et al., 2014) has been extended with information about the scene (such as the objects in the image). Since this information has a textual form, it can be used to leverage any NLP task, such as text similarity or semantic relatedness methods, in captioning systems, either as an end-to-end training strategy or as a post-processing approach.
 
- Please refer to [GitHub](https://github.com/ahmedssabir/Visual-Semantic-Relatedness-Dataset-for-Image-Captioning) for more information.
 
  # Overview
 
- We enrich COCO-Caption with **Textual Visual Context** information. We use [ResNet152](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf), [CLIP](https://github.com/openai/CLIP), and [Faster R-CNN](https://github.com/tensorflow/models/tree/master/research/object_detection) to extract
- object information for each COCO-Caption image. We use three filter approaches to ensure the quality of the dataset: (1) threshold, to filter out predictions where the object classifier is not confident enough; (2) semantic alignment, using semantic similarity to remove duplicated objects; and (3) a semantic relatedness score as a soft label: to guarantee that the visual context and the caption are strongly related, we use [Sentence RoBERTa](https://www.sbert.net) (SBERT uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity) to give a soft label via cosine similarity, with a threshold to annotate the final label (if the similarity exceeds a threshold of 0.2, 0.3, or 0.4, the label is 1, otherwise 0). Finally, to take advantage of the overlap between the visual context and the caption, and to extract global information from each visual context, we use [BERT followed by a shallow CNN](https://huggingface.co/AhmedSSabir/BERT-CNN-Visual-Semantic) [(Kim, 2014)](https://arxiv.org/pdf/1408.5882.pdf).
 
-
- For a quick start, please have a look at this [colab](https://colab.research.google.com/drive/1N0JVa6y8FKGLLSpiG7hd_W75UYhHRe2j?usp=sharing).
 
  # Introduction
 
+ Modern image captioning relies heavily on extracting knowledge from images, such as objects,
+ to capture the concept of a static story in the image. In this paper, we propose a textual visual context dataset
+ for captioning, where the publicly available COCO Captions dataset (Lin et al., 2014) has been extended with information
+ about the scene (such as the objects in the image). Since this information has a textual form, it can be used to leverage any NLP task,
+ such as text similarity or semantic relatedness methods, in captioning systems, either as an end-to-end training strategy or as a post-processing approach.
 
+ Please refer to the [project page](https://sabirdvd.github.io/project_page/Dataset_2022/index.html) and [GitHub](https://github.com/ahmedssabir/Visual-Semantic-Relatedness-Dataset-for-Image-Captioning) for more information.
 
  # Overview
 
+ We enrich COCO-Caption with textual visual context information. We use ResNet152, CLIP,
+ and Faster R-CNN to extract object information for each image. We use three filter approaches
+ to ensure the quality of the dataset: (1) threshold, to filter out predictions where the object classifier
+ is not confident enough; (2) semantic alignment, using semantic similarity to remove duplicated objects;
+ and (3) a semantic relatedness score as a soft label, to guarantee that the visual context and the caption
+ are strongly related. In particular, we use Sentence-RoBERTa via cosine similarity to give a soft score, and then
+ we use a threshold to annotate the final label (if the similarity exceeds a threshold of 0.2, 0.3, or 0.4, the label is 1, otherwise 0).
+ Finally, to take advantage of the overlap between the caption and the visual context,
+ and to extract global information, we use BERT followed by a shallow CNN ([Kim, 2014](https://arxiv.org/abs/1408.5882))
+ to estimate the visual relatedness score.
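
To make step (3) concrete, here is a minimal sketch of how such a soft label can be computed with the `sentence-transformers` library; the checkpoint name, threshold, and example strings are illustrative assumptions, not the exact settings used to build this dataset.

```python
# Minimal sketch (assumed settings): soft-labelling a caption / visual-context pair
# with an SBERT-style RoBERTa model, following step (3) above.
from sentence_transformers import SentenceTransformer, util

# Any SBERT checkpoint can stand in here; this particular name is an assumption.
model = SentenceTransformer("all-roberta-large-v1")

caption = "a man riding a surfboard on a wave"   # hypothetical COCO-style caption
visual_context = "surfboard"                     # object label from the visual classifier

emb_caption = model.encode(caption, convert_to_tensor=True)
emb_context = model.encode(visual_context, convert_to_tensor=True)

# Cosine similarity between the two embeddings acts as the soft relatedness score.
score = util.cos_sim(emb_caption, emb_context).item()

# The README describes hard labels derived with thresholds of 0.2, 0.3, or 0.4.
threshold = 0.2
label = 1 if score > threshold else 0
print(f"soft score = {score:.3f}, label = {label}")
```

The same cosine-similarity check can also back filter (2): an object label whose embedding is nearly identical to one already kept can be dropped as a duplicate.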
 
+ For a quick start, please have a look at this [demo](https://github.com/ahmedssabir/Textual-Visual-Semantic-Dataset/blob/main/BERT_CNN_Visual_re_ranker_demo.ipynb).
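
If you prefer to pull the data directly from the Hub, a minimal sketch with the `datasets` library might look like the following; the repository id and split name are placeholders, not confirmed values, so substitute the identifier shown on this dataset page.

```python
# Minimal quick-start sketch using the Hugging Face `datasets` library.
from datasets import load_dataset

# Placeholder repository id (assumption): replace it with the id of this dataset page.
repo_id = "AhmedSSabir/<this-dataset>"

dataset = load_dataset(repo_id)
print(dataset)              # inspect the available splits and columns
print(dataset["train"][0])  # one caption / visual-context record (split name assumed)
```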
 