---
dataset_info:
  features:
  - name: sequence
    dtype: large_string
  splits:
  - name: train
    num_bytes: 45299669517.08662
    num_examples: 207228723
  - name: valid
    num_bytes: 2185974.456691827
    num_examples: 10000
  - name: test
    num_bytes: 2185974.456691827
    num_examples: 10000
  download_size: 44646532435
  dataset_size: 45304041466.00001
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---
# OMGProt50 with evaluation splits
Thanks to Tatta Bio for putting together such an amazing dataset!

To create this version, we removed the IDs to save space and added the evaluation sets via random splits.
See here for a pretokenized version.