fix metadata paths - wiki, pes2o

#10
No description provided.
baileyk changed pull request status to open
baileyk changed pull request title from fix metadata paths - wiki, dclm, pes2o to fix metadata paths - wiki, pes2o

Slight fix following https://huggingface.co/datasets/allenai/olmo-mix-1124/discussions/8 -- wiki and pes2o don't have the same nested paths as the other datasets, so these two were reverted.
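A rough illustration of what this means when loading via data_files (the path patterns here are hypothetical placeholders, not necessarily the repo's exact layout):

from datasets import load_dataset

# Subsets like arxiv keep per-split subfolders, so a nested pattern applies
# (mirroring the example used later in this thread):
arxiv = load_dataset(
    "allenai/olmo-mix-1124",
    name="arxiv",
    split="train",
    data_files={"train": "data/arxiv/train/*"},
    streaming=True,
)

# wiki and pes2o keep their files one level up, so a flatter pattern is assumed here:
wiki = load_dataset(
    "allenai/olmo-mix-1124",
    name="wiki",
    split="train",
    data_files={"train": "data/wiki/*"},
    streaming=True,
)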

baileyk changed pull request status to merged

Thanks for merging the PR. Didn't it work with the wildcard for pes2o?
Also thanks for adding the correct features. I believe that makes streaming the dataset overall a bit nicer 🙃

Hm, unfortunately I can't load the dataset anymore without specifying the pr/8 revision; something seems malformed in the README.md, I guess.

from datasets import load_dataset

dataset = load_dataset(
    "allenai/olmo-mix-1124",
    split="train",
    name="arxiv",
    data_files={"train": "data/arxiv/train/*"},
    streaming=True,
)

Gives: Using the latest cached version of the dataset since allenai/olmo-mix-1124 couldn't be found on the Hugging Face Hub

This happens regardless of whether I stream, specify data_files, or use another subset.

Sorry about that, can you try again? There was a typo in the most recent merge for the datatypes, which just got fixed! I just tried your code above and it ran for me, but let me know if you're still having issues. Thanks for your patience!

I'm getting different "Couldn't cast" errors depending on which subset I try to download with a call to load_dataset(...) as above.
E.g.
Algebraic-stack:
TypeError: Couldn't cast array of type struct<paloma_paragraphs: list<item: list<item: int64>>> to string
Wiki:
TypeError: Couldn't cast array of type struct<length: int64, provenance: string, revid: string, url: string> to string
Arxiv:
TypeError: Couldn't cast array of type struct<paloma_paragraphs: list<item: null>> to string

These are the ones that I have tested so far. Whether streaming or not doesn't really make a difference, I think, although arxiv did start to download (it crashed due to not enough disk space on my end) while the others already crash when the download starts. However, when streaming arxiv, the first call of next(iter(dataset)) produces the error.

Ah, it seems the features "fix" we put in is actually causing these errors. I will work on fixing this ASAP. For now, if you need the dataset, we recommend downloading it in bulk. You can do this with the following steps, using the hfd.sh script from this gist:

https://gist.github.com/padeoe/697678ab8e528b85a2a7bddafea1fa4f

  1. Install aria2c.
  2. Run ./hfd.sh allenai/dolma --dataset --tool aria2c -x 4 -j 16 (this will download 16 files in parallel, each using 4 threads).

This should bypass the issues with load_dataset. If you would rather stay in Python, a rough alternative is sketched below.
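For reference, here is a minimal Python sketch of the same bulk-download idea using huggingface_hub's snapshot_download (the local_dir and allow_patterns values below are just placeholder assumptions):

from huggingface_hub import snapshot_download

# Download the raw files directly, which also sidesteps load_dataset's schema casting.
snapshot_download(
    repo_id="allenai/olmo-mix-1124",
    repo_type="dataset",
    local_dir="olmo-mix-1124",        # hypothetical target directory
    allow_patterns=["data/arxiv/*"],  # optional: restrict to a single subset
    max_workers=16,                   # number of files downloaded in parallel
)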

Sorry again for the inconvenience. We are hoping to resolve this so that future datasets won't have the same problem with load_dataset.

@idwnunohbru I just spoke with our friend at Hugging Face who is helping with this issue. He had to make a fix on their end, and said that installing datasets from source will fix the issue until a new release is rolled out. This worked for me -- can you try:

pip install git+https://github.com/huggingface/datasets.git

then run your command again:

from datasets import load_dataset

ds = load_dataset(
    "allenai/olmo-mix-1124",
    name="arxiv", 
    split="train",
    streaming=True,
)

print(next(iter(ds)))

This now gets rid of the error and returns: {'text': '\\section{Introduction}\n\nThis paper studies the stability of traveling wave solutions to scalar hyperbolic equations of the form\n\\begin{equation}......

Hi again,

I was on leave and at a conference for the last few weeks, so sorry for the long silence.
I installed from the repo and now most of the problems are fixed; however, DCLM still throws a cast error:

from datasets import load_dataset

ds = load_dataset(
    "allenai/olmo-mix-1124",
    name="dclm", 
    split="train",
    streaming=True,
)

print(next(iter(ds)))

Throws:

CastError: Couldn't cast
bff_contained_ngram_count_before_dedupe: int64
language_id_whole_page_fasttext: struct<en: double>
  child 0, en: double
metadata: string
previous_word_count: int64
text: string
url: string
warcinfo: string
fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train_prob: double
version: string
added: string
created: string
doc: string
id: string
source: string
attributes: string
to
{'text': Value('string'), 'added': Value('string'), 'created': Value('string'), 'attributes': Value('string'), 'doc': Value('string'), 'id': Value('string'), 'metadata': Value('string'), 'source': Value('string'), 'version': Value('string')}
because column names don't match

I suspect this to be an issue with the README.md, as it works when using revision="refs/pr/8" when creating ds, FYI.
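For reference, the revision workaround looks roughly like this (just a sketch; it assumes the refs/pr/8 branch is still available):

from datasets import load_dataset

# Pin to the pr/8 revision, whose README seems to match the dclm files
ds = load_dataset(
    "allenai/olmo-mix-1124",
    name="dclm",
    split="train",
    streaming=True,
    revision="refs/pr/8",
)

print(next(iter(ds)))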

I'm working on a more permanent fix, which I should have next week. For now, as a workaround, you should be able to manually specify the features and pass them in. This works for me:

from datasets import load_dataset, Features, Value

# Declare the full dclm schema explicitly, mirroring the columns listed in the CastError above
features = Features({
    "text": Value("string"),
    "added": Value("string"),
    "created": Value("string"),
    "attributes": Value("string"),
    "doc": Value("string"),
    "id": Value("string"),
    "metadata": Value("string"),
    "source": Value("string"),
    "version": Value("string"),
    "bff_contained_ngram_count_before_dedupe": Value("int64"),
    "previous_word_count": Value("int64"),
    "url": Value("string"),
    "warcinfo": Value("string"),
    "fasttext_openhermes_reddit_eli5_vs_rw_v2_bigram_200k_train_prob": Value("float64"),
    "language_id_whole_page_fasttext": {
        "en": Value("float64")
    },
})

ds = load_dataset(
    "allenai/olmo-mix-1124",
    name="dclm",
    split="train",
    streaming=True,
    features=features,
)

print(next(iter(ds)))

which now, instead of the CastError, gives:
{'text': 'Take the 2-minute tour ×\n\nHere what happened with me today. TimeMachine asked me whether I want to set a backup disk, I\'ve answered yes, but then, when I\'ve ........
