AhmedSSabir committed db48bcd (parent: 0ef1791): Update README.md

README.md CHANGED
@@ -7,7 +7,7 @@ Modern image captioning relies heavily on extracting knowledge from images such
 
 # Overview
 
-We enrich COCO-caption with **
+We enrich COCO-caption with **Textual Visual Context** information. We use [ResNet152](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf), [CLIP](https://github.com/openai/CLIP) and [Faster R-CNN](https://github.com/tensorflow/models/tree/master/research/object_detection) to extract
 object information for each COCO-caption image. We use three filtering approaches to ensure the quality of the dataset: (1) Threshold: filter out predictions where the object classifier is not confident enough; (2) semantic alignment: remove duplicated objects via semantic similarity; and (3) semantic relatedness score as a soft label: to guarantee that the visual context and the caption are strongly related, we use [Sentence RoBERTa](https://www.sbert.net) (SBERT uses a siamese network to derive meaningful sentence embeddings that can be compared via cosine similarity) to assign a soft label via cosine similarity with a **th**reshold (if th > 0.2, 0.3, or 0.4, then 1, else 0). Finally, to take advantage of the overlap between the visual context and the caption, and to extract global information from each visual, we use BERT followed by a shallow CNN [(Kim, 2014)](https://arxiv.org/pdf/1408.5882.pdf).
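The three filters described in the updated paragraph (confidence threshold, semantic de-duplication, and an SBERT cosine-similarity soft label) can be sketched in a few lines. This is a minimal illustration, not the dataset's actual pipeline: the `filter_visual_context` helper, the `stsb-roberta-large` checkpoint, and the concrete threshold values are assumptions made for the example.

```python
from sentence_transformers import SentenceTransformer, util

# Checkpoint name is an assumption; any SBERT model from sbert.net would do.
sbert = SentenceTransformer("stsb-roberta-large")


def filter_visual_context(predictions, caption, conf_th=0.4, dup_th=0.85, sim_th=0.3):
    """Hypothetical helper mirroring the three filters described above.

    predictions: list of (object_label, classifier_confidence) pairs,
                 e.g. from ResNet152 / CLIP / Faster R-CNN.
    Returns (kept_objects, soft_labels), where soft_labels maps each kept
    object to (cosine similarity with the caption, binary label).
    """
    # (1) Threshold: drop predictions where the classifier is not confident enough.
    confident = [obj for obj, conf in predictions if conf >= conf_th]

    # (2) Semantic alignment: drop near-duplicate objects via SBERT similarity.
    kept = []
    for obj in confident:
        emb = sbert.encode([obj] + kept, convert_to_tensor=True)
        if not kept or util.cos_sim(emb[0], emb[1:]).max().item() < dup_th:
            kept.append(obj)

    # (3) Soft label: cosine similarity between caption and visual context,
    #     binarized with a threshold (0.2 / 0.3 / 0.4 in the README's setting).
    soft_labels = {}
    for obj in kept:
        emb = sbert.encode([caption, obj], convert_to_tensor=True)
        sim = util.cos_sim(emb[0], emb[1]).item()
        soft_labels[obj] = (sim, int(sim > sim_th))
    return kept, soft_labels


# Example usage with made-up predictions for a single COCO image.
kept, labels = filter_visual_context(
    [("horse", 0.92), ("equine", 0.55), ("traffic light", 0.12)],
    caption="a man riding a horse on a beach",
)
print(kept, labels)
```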