replace too-GPT phrases
README.md CHANGED
@@ -33,12 +33,12 @@ configs:

# About This Dataset

The HuggingFaceTB team has released an impressive series of models called smollm (V1/V2) (paper: https://arxiv.org/abs/2502.02737). According to their documentation, they used Stack-Edu as the code portion of the pretraining corpus and published https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus.

However, for some reason only the Python-Edu subset is accessible, and it contains no content/text field. The full dataset is stored on AWS S3; downloading it requires an AWS EC2 instance, otherwise AWS's rate limits will block you and the download will never finish.
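
Since the released subset carries only file metadata, the actual text has to be fetched from storage. Below is a minimal sketch of how that reconstruction can work, assuming the layout documented for The Stack v2 / Python-Edu: a `softwareheritage` S3 bucket with gzip-compressed blobs keyed by `content/<blob_id>`. The bucket name, key scheme, and the `blob_id` column are assumptions taken from that documentation, not from this README.

```python
import gzip

import boto3  # requires configured AWS credentials

# Assumed layout (per The Stack v2 / Python-Edu docs): blobs are stored
# gzip-compressed in the "softwareheritage" bucket under content/<blob_id>.
S3_BUCKET = "softwareheritage"

s3 = boto3.client("s3")


def download_content(blob_id: str) -> str:
    """Fetch one file's text by its Software Heritage blob id."""
    obj = s3.get_object(Bucket=S3_BUCKET, Key=f"content/{blob_id}")
    with gzip.GzipFile(fileobj=obj["Body"]) as fin:
        return fin.read().decode("utf-8", errors="ignore")
```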

Fortunately, the py-edu subset is small enough (~7 million files) for me to afford; downloading the entire set takes approximately one hour. I am publishing the complete py-edu dataset here for anyone who needs it (a loading sketch follows below).
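
For consumers, a minimal loading sketch: the repo id below is a placeholder (this page does not state the final path), and the column names are not specified here, so inspect the first record rather than assuming a schema.

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual path on the Hub.
ds = load_dataset("<user>/py-edu", split="train", streaming=True)

# Streaming avoids materializing ~7M files locally; peek at the schema first.
for row in ds.take(1):
    print(sorted(row.keys()))
```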

If this release inadvertently causes any issues for the HuggingFaceTB team, please reach out to me and I will remove it immediately.