Unable to download the openwebtext dataset
Thank you. I am running into the same problem; I have tried a lot of suggested solutions, but none of them work.
Traceback (most recent call last):
File "/home/lizhuoran/mydata/data_new/diffusion-llm/Score-Entropy-Discrete-Diffusion-main/load_data.py", line 4, in <module>
dataset = load_dataset(name, cache_dir=cache_dir, download_mode="force_redownload")
File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 2074, in load_dataset
builder_instance = load_dataset_builder(
File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 1795, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 1671, in dataset_module_factory
raise e1 from None
File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/site-packages/datasets/load.py", line 1617, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/lizhuoran/anaconda3/envs/sedd/lib/python3.9/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
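For what it's worth, byte 0x8b in position 1 matches the gzip magic number (0x1f 0x8b), so the file that datasets is reading as UTF-8 text may actually be gzip-compressed — possibly a compressed response that was saved to disk as-is. A minimal sketch of why that produces exactly this error:

```python
import gzip

# gzip streams always start with the two magic bytes 0x1f 0x8b
payload = gzip.compress(b"hello")
print(hex(payload[0]), hex(payload[1]))  # 0x1f 0x8b

# decoding such bytes as UTF-8 fails exactly like the traceback above:
# byte 0 (0x1f) is valid ASCII, byte 1 (0x8b) is not a valid UTF-8 start byte
try:
    payload.decode("utf-8")
except UnicodeDecodeError as err:
    print(err)
```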
My test code is:
from datasets import load_dataset
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
name = "wikitext"
cache_dir = "./data"
dataset = load_dataset("wikitext", name="wikitext-103-raw-v1", cache_dir=cache_dir)
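One thing that may be worth trying before re-running: delete the local caches so a corrupted file is not reused. A sketch, assuming the same ./data cache_dir as above — note the offending cached file may instead live under the Hugging Face hub cache (by default ~/.cache/huggingface), so that directory may need clearing too:

```python
import os
import shutil

cache_dir = "./data"  # same cache_dir as in the snippet above

# Remove the dataset cache so nothing corrupted is reused on the next attempt.
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)

# Re-run load_dataset(...) afterwards; with the cache gone,
# datasets has to fetch everything fresh from the configured endpoint.
```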
Try the new convert-parquet-full branch and see if that works for you.
I'm still running into this issue. Here's a minimal example:
import datasets
from datasets import load_dataset
# Load the dataset from Hugging Face Hub
print(f"\nUsing version: {datasets.__version__}\n")
dataset = load_dataset("Skylion007/openwebtext")
This results in:
Using version: 2.21.0
Traceback (most recent call last):
File "/home/nsa325/work/sparse_evals/analysis/test.py", line 6, in <module>
dataset = load_dataset("Skylion007/openwebtext")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nsa325/work/sparse_evals/.venv/lib/python3.11/site-packages/datasets/load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/home/nsa325/work/sparse_evals/.venv/lib/python3.11/site-packages/datasets/load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nsa325/work/sparse_evals/.venv/lib/python3.11/site-packages/datasets/load.py", line 1923, in dataset_module_factory
raise e1 from None
File "/home/nsa325/work/sparse_evals/.venv/lib/python3.11/site-packages/datasets/load.py", line 1875, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
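The failing call is f.read() on a file that datasets expects to be plain text; opening a gzip-compressed file in text mode reproduces the identical error, which again points at a compressed file ending up where a text file is expected. A self-contained reproduction (no network needed):

```python
import gzip
import tempfile

# Write a gzip-compressed file, then read it back as UTF-8 text,
# mimicking the f.read() call in the traceback above.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(gzip.compress(b"DEFAULT_CONFIG_NAME"))
    path = tmp.name

try:
    with open(path, encoding="utf-8") as f:
        f.read()
except UnicodeDecodeError as err:
    print(err)  # 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```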
Try downgrading datasets to 3.6.0 with pip3 install datasets==3.6.0 -U.