MUSIC-AVQA-v2.0

Data release for the paper Tackling Data Bias in MUSIC-AVQA: Crafting a Balanced Dataset for Unbiased Question-Answering (accepted at WACV 2024) by Xiulong Liu, Zhikang Dong, and Peng Zhang.

The paper addresses the data bias issue in the original MUSIC-AVQA dataset by manually collecting 1230 musical instrument performance videos along with 8.1k newly created QA pairs that complement the original biased QA set.

This repository contains the additional video data and QA pairs created to balance the original MUSIC-AVQA dataset. The videos were manually collected from YouTube and are used for research purposes only. We release the dataset in two parts: i). the videos, via YouTube IDs along with the trimmed start time and duration for each video; ii). the entire training and test QA sets for MUSIC-AVQA-v2.0, along with 1040 additional videos (preprocessed and cut to 60 s) that complement the existing MUSIC-AVQA and can be downloaded through this link. To access the original MUSIC-AVQA videos, you can find the links directly in the MUSIC-AVQA Dataset.
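For reference, below is a minimal sketch of how a single clip could be fetched and trimmed from its YouTube ID, start time, and duration. It assumes yt-dlp and ffmpeg are installed and on PATH; this is an illustration, not the authors' official preprocessing pipeline.

```python
# Minimal sketch (not the official pipeline): download one clip by YouTube ID
# and cut the annotated segment.  Assumes yt-dlp and ffmpeg are on PATH.
import subprocess

def download_and_trim(video_id, start_time, duration=60.0, out_path=None):
    """Fetch the full video, then cut `duration` seconds starting at `start_time`."""
    raw_path = f"{video_id}_raw.mp4"
    out_path = out_path or f"{video_id}.mp4"
    # Download the source video as MP4.
    subprocess.run(
        ["yt-dlp", "-f", "mp4", "-o", raw_path,
         f"https://www.youtube.com/watch?v={video_id}"],
        check=True,
    )
    # Re-encode the requested segment so the cut is frame-accurate.
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start_time), "-i", raw_path,
         "-t", str(duration), "-c:v", "libx264", "-c:a", "aac", out_path],
        check=True,
    )

# Example call; the ID and start time here are placeholders, not real CSV rows.
# download_and_trim("VIDEO_ID", start_time=10.0, duration=60.0)
```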

Instructions

  1. To download the additional videos, see the MUSIC-AVQA-v2.0_additional_videos.csv file under the data folder. There are 4 fields for each video (a small usage sketch follows this list):
    i). video_id: the YouTube video ID.
    ii). start_time: the time at which the cut clip begins.
    iii). prefix: a few YouTube videos have more than one segment extracted, so we append a prefix such as "#02" or "#03" after the video_id when naming those videos to avoid duplication.
    iv). has_flip: we flip some videos horizontally and pair them with symmetric QA pairs. For videos marked Y, two files are associated: the original video (e.g., xxx.mp4) and the flipped version (xxx_flip.mp4).
  2. The MUSIC-AVQA-v2.0 'full' balanced QA dataset is provided under the data/balance_full_set folder. We provide the entire train and test splits associated with all videos, including the original MUSIC-AVQA videos and the new videos we collected. This new QA dataset not only balances the original dataset but also corrects QA pairs with problematic annotations in the original dataset. For more details on how we balanced the original dataset, please refer to the paper.
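As an illustration of the CSV fields described in item 1, the following sketch lists the local clip file names implied by each row. The column names ("video_id", "prefix", "has_flip") and the Y marker are assumptions based on the field descriptions above, not a guaranteed schema.

```python
# Hedged sketch: enumerate the local file names implied by the additional-videos CSV.
# Column names and the Y convention are assumed from the field descriptions above.
import csv

def expected_clip_names(csv_path="data/MUSIC-AVQA-v2.0_additional_videos.csv"):
    names = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Segments beyond the first from the same YouTube video carry a
            # prefix such as "#02" appended to the video_id.
            base = row["video_id"] + (row.get("prefix") or "").strip()
            names.append(f"{base}.mp4")
            # Flipped copies are paired with symmetric QA pairs.
            if (row.get("has_flip") or "").strip().upper() == "Y":
                names.append(f"{base}_flip.mp4")
    return names

# print(expected_clip_names()[:5])
```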

Benchmark results

Feel free to submit your benchmark results to the Papers with Code leaderboard: https://paperswithcode.com/sota/on-music-avqa-v2-0.

If you find our improvements to the AVQA dataset useful, please consider citing both our paper and the original MUSIC-AVQA paper.
