Modalities: Audio, Image
Size: < 1K
Libraries: Datasets
License: cc-by-4.0
张绍磊 committed · Commit 4005e4b · 1 Parent(s): c117677
Files changed (1)
  1. README.md +9 -0
README.md CHANGED
@@ -1,3 +1,12 @@
  ---
  license: cc-by-4.0
  ---
+ SpokenVisIT
+
+ SpokenVisIT is a real-world visual-speech interaction benchmark built upon VisIT-Bench, designed to evaluate the visually grounded speech-interaction capabilities of omni large multimodal models (LMMs).
+
+ Our deepest acknowledgment goes to [VisIT-Bench](https://huggingface.co/datasets/mlfoundations/VisIT-Bench), a benchmark for vision-language instruction following inspired by real-world use, which collects a diverse set of real-world visual instructions. SpokenVisIT builds on this foundation by converting the textual instructions into spoken language, enabling assessment of LMMs' capabilities in spoken interaction. **Please use SpokenVisIT under the license terms of VisIT-Bench.**
+
+ For more information on VisIT-Bench, please refer to the [paper](https://arxiv.org/abs/2308.06595), [blog](https://visit-bench.github.io/), and [code](https://github.com/mlfoundations/VisIT-Bench/).
+
+ For more information on SpokenVisIT, please refer to the [paper]() and [GitHub repo](https://github.com/ictnlp/Stream-Omni) of Stream-Omni.