nielsr (HF Staff) committed · verified
Commit ed42be0 · 1 Parent(s): 9c709ec

Update dataset card: Add comprehensive task categories, tags, abstract, and correct license


This PR significantly improves the dataset card for MMEB-V2 by:
- Expanding the `task_categories` metadata to comprehensively reflect the benchmark's scope, including `visual-document-retrieval`, `video-retrieval`, `temporal-grounding`, and `video-question-answering` alongside the existing categories.
- Adding relevant `tags` such as `multimodal`, `embedding`, `benchmark`, `video`, `image`, `document`, `temporal-grounding`, and `moment-retrieval` for better discoverability.
- Correcting the `license` to `cc-by-nc-4.0`. (A scripted equivalent of these metadata edits is sketched after this list.)
- Integrating the paper abstract into the content section to provide a detailed overview.
- Retaining the existing arXiv links for consistency, as per instructions.
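
For reference, a minimal sketch of how metadata edits like these could be applied programmatically with `huggingface_hub.metadata_update`. The repo id is an assumption (substitute the actual dataset id), and the values mirror the diff below:

```python
# Minimal sketch: applying this PR's card-metadata changes programmatically.
# Assumption: the repo id below is a placeholder; substitute the real dataset id.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="TIGER-Lab/MMEB-V2",  # assumed dataset id
    metadata={
        "license": "cc-by-nc-4.0",
        "task_categories": [
            "visual-document-retrieval",
            "video-retrieval",
            "temporal-grounding",
            "video-classification",
            "video-question-answering",
            "visual-question-answering",
        ],
        "tags": [
            "multimodal", "embedding", "benchmark", "video",
            "image", "document", "temporal-grounding", "moment-retrieval",
        ],
    },
    repo_type="dataset",
    overwrite=True,   # license/task_categories already exist and must be replaced
    create_pr=True,   # open a pull request (like this one) instead of committing directly
)
```

Here `overwrite=True` is required because the card already carries `license` and `task_categories` values, and `create_pr=True` routes the change through review rather than pushing straight to `main`.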

Files changed (1)
README.md +23 -6
README.md CHANGED
@@ -1,22 +1,39 @@
 ---
-license: apache-2.0
-task_categories:
-- visual-question-answering
-- video-classification
 language:
 - en
+license: cc-by-nc-4.0
+task_categories:
+- visual-document-retrieval
+- video-retrieval
+- temporal-grounding
+- video-classification
+- video-question-answering
+- visual-question-answering
+tags:
+- multimodal
+- embedding
+- benchmark
+- video
+- image
+- document
+- temporal-grounding
+- moment-retrieval
 viewer: false
 configs:
 - config_name: splits
   data_files:
   - split: eval
     path:
-    - "video_tasks"
-    - "image_tasks"
+    - video_tasks
+    - image_tasks
 ---

 # MMEB-V2 (Massive Multimodal Embedding Benchmark)

+## Paper Abstract
+
+Multimodal embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering over different modalities. However, existing multimodal embeddings like VLM2Vec, E5-V, GME are predominantly focused on natural images, with limited support for other visual forms such as videos and visual documents. This restricts their applicability in real-world scenarios, including AI agents, multi-modal search and recommendation, and retrieval-augmented generation (RAG). To close this gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types: visual document retrieval, video retrieval, temporal grounding, video classification and video question answering - spanning text, image, video, and visual document inputs. Next, we train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs. Extensive experiments show that VLM2Vec-V2 achieves strong performance not only on the newly introduced video and document retrieval tasks, but also improves over prior baselines on the original image benchmarks. Through extensive evaluation, our study offers insights into the generalizability of various multimodal embedding models and highlights effective strategies for unified embedding learning, laying the groundwork for more scalable and adaptable representation learning in both research and real-world settings.
+
 Building upon our original [**MMEB**](https://arxiv.org/abs/2410.05160), **MMEB-V2** expands the evaluation scope to include five new tasks: four video-based tasks — Video Retrieval, Moment Retrieval, Video Classification, and Video Question Answering — and one task focused on visual documents, Visual Document Retrieval. This comprehensive suite enables robust evaluation of multimodal embedding models across static, temporal, and structured visual data settings.

 **This Hugging Face repository contains only the raw image and video files used in MMEB-V2, which need to be downloaded in advance.**
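
Since the card states that this repository hosts only the raw image and video files, one would pre-download them before running evaluation. A minimal sketch using `huggingface_hub.snapshot_download`; the repo id is an assumption (substitute the actual dataset id), and the folder names come from the `path` entries in the card's config:

```python
# Minimal sketch: pre-download the raw MMEB-V2 image/video files.
# Assumption: the repo id below is a placeholder; substitute the real dataset id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TIGER-Lab/MMEB-V2",  # assumed dataset id
    repo_type="dataset",          # the raw files live in a dataset repo
    allow_patterns=["image_tasks/*", "video_tasks/*"],  # folders named in the card's config
)
print(f"Files downloaded to: {local_dir}")
```

Restricting the download with `allow_patterns` keeps the transfer to the two task folders the config points at, which matters given the size of raw video data.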