
# It is recommended that you upgrade pip and setuptools prior to install for max compatibility
!pip install --upgrade pip
!pip install --upgrade setuptools
# The following command will install Godzilla MIDI Dataset for CPU-only search
# Please note that CPU search is quite slow and requires a minimum of 128GB RAM for full searches
!pip install -U godzillamididataset
# The following command will install Godzilla MIDI Dataset for fast GPU search
# Please note that GPU search requires at least 30GB GPU VRAM for full searches at float16 precision
!pip install -U godzillamididataset[gpu]
# The following command will install packages for Fast Parallel Extract module
# It will allow you to extract (untar) Godzilla MIDI Dataset much faster
!sudo apt update -y
!sudo apt install -y p7zip-full
!sudo apt install -y pigz
# The following command will install packages for midi_to_colab_audio module
# It will allow you to render Godzilla MIDI Dataset MIDIs to audio
!sudo apt update -y
!sudo apt install -y fluidsynth
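Once fluidsynth is installed, the CLI can render a MIDI file to audio directly. As a minimal sketch (this is not the midi_to_colab_audio module itself, and the file paths and soundfont name below are placeholders), a render command could be assembled like this:

```python
import subprocess


def build_fluidsynth_cmd(midi_path, soundfont_path, wav_path, sample_rate=44100):
    """Build a fluidsynth command that renders a MIDI file to a WAV file.

    -ni disables MIDI input and the interactive shell,
    -F  writes the rendered audio to a file, -r sets the sample rate.
    """
    return [
        "fluidsynth", "-ni",
        str(soundfont_path), str(midi_path),
        "-F", str(wav_path),
        "-r", str(sample_rate),
    ]


def render_midi(midi_path, soundfont_path, wav_path):
    """Run fluidsynth; raises CalledProcessError if rendering fails."""
    subprocess.run(build_fluidsynth_cmd(midi_path, soundfont_path, wav_path),
                   check=True)
```

One of the soundfont banks from the dataset's SOUNDFONTS/ directory could be passed as `soundfont_path`.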
# Import main Godzilla MIDI Dataset module
import godzillamididataset
# Download Godzilla MIDI Dataset from Hugging Face repo
godzillamididataset.download_dataset()
# Extract Godzilla MIDI Dataset with built-in function (slow)
godzillamididataset.parallel_extract()
# Or you can extract much faster if you have installed the optional packages for Fast Parallel Extract
# from godzillamididataset import fast_parallel_extract
# fast_parallel_extract.fast_parallel_extract()
# Load all MIDIs basic signatures
sigs_data = godzillamididataset.read_jsonl()
# Create signatures dictionaries
sigs_dicts = godzillamididataset.load_signatures(sigs_data)
# Pre-compute signatures
X, global_union = godzillamididataset.precompute_signatures(sigs_dicts)
# Run the search
# IO dirs will be created on the first run of the following function
# Do not forget to put your master MIDIs into created Master-MIDI-Dataset folder
# The full search for each master MIDI takes about 2-3 sec on a GPU and 4-5 min on a CPU
godzillamididataset.search_and_filter(sigs_dicts, X, global_union)
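Signature-based search of this kind typically compares per-file feature counts between the master MIDI and every dataset MIDI. As a purely conceptual sketch (this is not the library's actual algorithm; the signature format and the weighted-Jaccard scoring are assumptions for illustration), a similarity ranking over pitch/patch count signatures could look like:

```python
from collections import Counter


def signature_similarity(sig_a, sig_b):
    """Weighted Jaccard similarity between two count signatures.

    Each signature maps a feature (e.g. a pitch or patch number)
    to how often it occurs in a MIDI file. Returns a score in [0, 1].
    """
    keys = set(sig_a) | set(sig_b)
    inter = sum(min(sig_a.get(k, 0), sig_b.get(k, 0)) for k in keys)
    union = sum(max(sig_a.get(k, 0), sig_b.get(k, 0)) for k in keys)
    return inter / union if union else 0.0


def rank_matches(master_sig, dataset_sigs, top_n=5):
    """Return the top_n dataset entries most similar to master_sig."""
    scored = [(name, signature_similarity(master_sig, sig))
              for name, sig in dataset_sigs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_n]
```

Precomputing the signatures up front, as `precompute_signatures` does, avoids re-deriving them for every query.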
Godzilla-MIDI-Dataset/ # Dataset root dir
├── ARTWORK/ # Concept artwork
│ ├── Illustrations/ # Concept illustrations
│ ├── Logos/ # Dataset logos
│ └── Posters/ # Dataset posters
├── CODE/ # Supplemental python code and python modules
├── DATA/ # Dataset (meta)data dir
│ ├── Averages/ # Averages data for all MIDIs and clean MIDIs
│ ├── Basic Features/ # All basic features for all clean MIDIs
│ ├── Files Lists/ # Files lists by MIDIs types and categories
│ ├── Identified MIDIs/ # Comprehensive data for identified MIDIs
│ ├── Metadata/ # Raw metadata from all MIDIs
│ ├── Mono Melodies/ # Data for all MIDIs with monophonic melodies
│ ├── Pitches Patches Counts/ # Pitches-patches counts for all MIDIs
│ ├── Pitches Sums/ # Pitches sums for all MIDIs
│ ├── Signatures/ # Signatures data for all MIDIs and MIDIs subsets
│ └── Text Captions/ # Music description text captions for all MIDIs
├── MIDIs/ # Root MIDIs dir
└── SOUNDFONTS/ # Select high-quality soundfont banks to render MIDIs
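Once extracted, the MIDIs/ subtree can be traversed with the standard library. A small sketch (the root path and file extensions are assumptions based on the layout above):

```python
from pathlib import Path


def collect_midis(dataset_root):
    """Recursively collect all MIDI files under the dataset's MIDIs/ dir."""
    midis_dir = Path(dataset_root) / "MIDIs"
    return sorted(p for ext in ("*.mid", "*.midi")
                  for p in midis_dir.rglob(ext))


# Example usage:
# midis = collect_midis("Godzilla-MIDI-Dataset")
# print(len(midis), "MIDI files found")
```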
@misc{GodzillaMIDIDataset2025,
title = {Godzilla MIDI Dataset: Enormous, comprehensive, normalized and searchable MIDI dataset for MIR and symbolic music AI purposes},
author = {Alex Lev},
publisher = {Project Los Angeles / Tegridy Code},
year = {2025},
url = {https://huggingface.co/datasets/projectlosangeles/Godzilla-MIDI-Dataset}
}
@misc{breadai_2025,
title = {Sourdough-midi-dataset (Revision cd19431)},
author = {{BreadAi}},
year = {2025},
url = {https://huggingface.co/datasets/BreadAi/Sourdough-midi-dataset},
doi = {10.57967/hf/4743},
publisher = {Hugging Face}
}
@inproceedings{bradshawaria,
title={Aria-MIDI: A Dataset of Piano MIDI Files for Symbolic Music Modeling},
author={Bradshaw, Louis and Colton, Simon},
booktitle={International Conference on Learning Representations},
year={2025},
url={https://openreview.net/forum?id=X5hrhgndxW},
}
@misc{TegridyMIDIDataset2025,
title = {Tegridy MIDI Dataset: Ultimate Multi-Instrumental MIDI Dataset for MIR and Music AI purposes},
author = {Alex Lev},
publisher = {Project Los Angeles / Tegridy Code},
year = {2025},
url = {https://github.com/asigalov61/Tegridy-MIDI-Dataset}
}