Streaming mode broken

#2
by sanchit-gandhi - opened

Hey @polinaeterna ! We've got some Whisper event participants who are excited about using this dataset! It currently doesn't work with streaming mode (see the dataset viewer). Is this an issue with audiofolder or with how we're processing the data? Thank you! 🙌

Hi! For some reason it detects different loader formats for the splits ("text" for train and validation, "audiofolder" for test). I'm investigating it!

@sanchit-gandhi the problem is that this dataset doesn't have a loading script, so the library tries to load it with one of the packaged modules. The "train" and "valid" archives contain both .wav and .txt files (one .txt file per .wav file), so the module to use cannot be correctly inferred from the file extensions.

In order for this dataset to be loaded with the audiofolder packaged module, the transcriptions for each split should be put in a single metadata.csv / metadata.jsonl file within each archive, as described in the documentation: https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata. Note that the "test" archives should also contain a metadata file! Otherwise the feature lists of the splits won't match and loading will fail with an error. In the test metadata file one can put empty strings ("") in the transcription field. I would use the jsonl format to avoid possible type-detection errors ("" can be inferred as float in csv if a column contains only empty strings, I assume).
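
For illustration, here's a rough sketch of how such a metadata.jsonl could be generated for each split from the per-.wav .txt files (the directory layout, split names, and the "transcription" column name are assumptions based on the description above; only "file_name" is required by audiofolder):

```python
import json
from pathlib import Path

# Assumed layout: one directory per split, containing .wav files and,
# for train/valid, one .txt transcription per .wav file.
for split in ["train", "valid", "test"]:
    split_dir = Path(split)
    with open(split_dir / "metadata.jsonl", "w", encoding="utf-8") as f:
        for wav in sorted(split_dir.glob("*.wav")):
            txt = wav.with_suffix(".txt")
            # The test split has no transcriptions, so write empty strings
            # to keep the features consistent across splits.
            transcription = txt.read_text(encoding="utf-8").strip() if txt.exists() else ""
            row = {"file_name": wav.name, "transcription": transcription}
            f.write(json.dumps(row, ensure_ascii=False) + "\n")
```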

But I have no idea why there are these "Pickle imports detected" notes next to the files in the repo. There shouldn't be any, I believe...

Thanks for the details @polinaeterna ! If anyone is interested in using this dataset and would like to have a go at re-organising the data, feel free to ping me and @polinaeterna on this thread - we can guide you through the steps to organise the data according to the documentation: https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata

Essentially, this will involve:

  1. Downloading all the data to a local device (git clone the repo)
  2. Restructuring the data according to the docs
  3. Testing that the new organisation works (see the sketch after this list)
  4. Pushing the newly organised data back to the Hub
  5. Using the dataset with load_dataset!
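
As a rough example of step 3/5, something like the following could verify the new layout without downloading everything (the repo id is just a placeholder):

```python
from datasets import load_dataset

# Streaming load: iterates over the archives on the Hub instead of
# downloading the full dataset, so it's a quick sanity check.
ds = load_dataset("username/dataset-name", split="train", streaming=True)
print(next(iter(ds)))  # should show an "audio" feature plus the transcription
```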
