FanLu31 committed
Commit e501a46 · verified · 1 parent: f137109

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ The CompreCap benchmark is characterized by human-annotated scene graph and focu
 It provides new semantic segmentation annotations for common objects in images, with an average mask coverage of 95.83%.
 Beyond the careful annotation of objects, CompreCap also includes high-quality descriptions of the attributes bound to the objects, as well as directional relation descriptions between the objects, composing a complete and directed scene graph structure:
 <div align="center">
- <img src="graph_anno.png" alt="CompreCap" width="1200" height="auto">
+ <img src="graph_anno.png" alt="CompreCap" width="1200" height="auto">
 </div>

 The annotations of segmentation masks, category names, and the descriptions of attributes and relationships are saved in [./anno.json](https://huggingface.co/datasets/FanLu31/CompreCap/blob/main/anno.json). Based on the CompreCap benchmark, researchers can comprehensively assess the quality of image captions generated by large vision-language models.
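For readers who want to work with these annotations programmatically, here is a minimal sketch that downloads the anno.json file referenced in the README and walks its scene graph entries. The `hf_hub_download` call is the real `huggingface_hub` API; the record layout assumed below (a mapping from image id to a graph with `objects` and `relations` keys) is a hypothetical schema for illustration only, since the README does not spell out the JSON structure.

```python
# Minimal sketch for inspecting CompreCap's anno.json.
# hf_hub_download is the real huggingface_hub API; the JSON layout below
# (per-image scene graphs with "objects" and "relations" keys) is an
# assumption for illustration, not a documented schema.
import json

from huggingface_hub import hf_hub_download

# Fetch anno.json from the dataset repo linked in the README.
path = hf_hub_download(
    repo_id="FanLu31/CompreCap",
    filename="anno.json",
    repo_type="dataset",
)

with open(path, encoding="utf-8") as f:
    anno = json.load(f)

# Hypothetical traversal of the first few entries, assuming a dict
# mapping each image id to a directed scene graph.
for image_id, graph in list(anno.items())[:3]:
    print(image_id)
    for obj in graph.get("objects", []):    # assumed key
        print("  object:", obj.get("category"), "attrs:", obj.get("attributes"))
    for rel in graph.get("relations", []):  # assumed key
        # Directed edge: subject --predicate--> object
        print("  relation:", rel.get("subject"), rel.get("predicate"), rel.get("object"))
```

Adjust the key names to whatever the actual anno.json uses once inspected; the download-then-traverse shape of the script stays the same.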