FanLu31 committed
Commit f0f1aec · verified · 1 Parent(s): b97535d

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -14,9 +14,7 @@ The CompreCap benchmark is characterized by human-annotated scene graph and focu
 It provides new semantic segmentation annotations for common objects in images, with an average mask coverage of 95.83%.
 Beyond the careful annotation of objects, CompreCap also includes high-quality descriptions of the attributes bound to the objects, as well as directional relation descriptions between the objects, composing a complete and directed scene graph structure:
 <div align="center">
-<p>
-<img src="graph_anno.png" alt="CompreCap" width="500" height="auto">
-</p>
+<img src="graph_anno.png" alt="CompreCap" width="1000" height="auto">
 </div>
 
 The annotations of segmentation masks, category names, and descriptions of attributes and relationships are saved in [./anno.json](https://huggingface.co/datasets/FanLu31/CompreCap/blob/main/anno.json). Based on the CompreCap benchmark, researchers can comprehensively assess the quality of image captions generated by large vision-language models.
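As a usage note for the README touched by this commit: below is a minimal sketch of how one might load the annotation file, assuming anno.json has been downloaded from the dataset repo and is plain JSON. Its schema is not documented in this diff, so the code only peeks at the top-level structure rather than assuming particular field names.

```python
import json

# Path is an assumption for illustration: download anno.json from
# https://huggingface.co/datasets/FanLu31/CompreCap/blob/main/anno.json first.
with open("anno.json", "r", encoding="utf-8") as f:
    anno = json.load(f)

# Peek at the top-level structure without assuming a specific schema for the
# segmentation masks, category names, attributes, or relationship descriptions.
if isinstance(anno, dict):
    print("top-level keys (first 5):", list(anno.keys())[:5])
else:
    print("number of records:", len(anno))
    print("first record:", anno[0])
```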