---
dataset_info:
  features:
  - name: blob_id
    dtype: string
  - name: repo_name
    dtype: string
  - name: path
    dtype: string
  - name: length_bytes
    dtype: int64
  - name: score
    dtype: float64
  - name: int_score
    dtype: int64
  - name: text
    dtype: string
  - name: download_success
    dtype: bool
  splits:
  - name: train
    num_bytes: 13499266964
    num_examples: 7678448
  download_size: 6086016638
  dataset_size: 13499266964
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# About This Dataset

The HuggingFaceTB team has released an impressive series of models, SmolLM (v1/v2) (paper: https://arxiv.org/abs/2502.02737).

According to their documentation, they used Stack-Edu as the code corpus for pretraining and published https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus.

However, for some reason, only the Python-Edu subset is accessible there, and it contains no content/text field.

The full file contents are stored on AWS S3, and downloading them realistically requires an AWS EC2 instance; otherwise AWS's rate limits make the download effectively impossible.
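
For reference, below is a minimal sketch of how the original file contents have to be fetched, assuming the same Software Heritage S3 layout used by The Stack v2 / Python-Edu (`s3://softwareheritage/content/<blob_id>`, gzip-compressed). The bucket path and credential setup are assumptions, not something this dataset requires; they only illustrate why an EC2 instance is the practical route.

```python
# Sketch: fetch one file's text from the Software Heritage S3 bucket by blob_id.
# Assumes the s3://softwareheritage/content/<blob_id> layout (gzip-compressed)
# and valid AWS credentials; run from an EC2 instance to avoid rate limiting.
import boto3
from smart_open import open as s3_open

s3 = boto3.client("s3")

def download_blob(blob_id: str) -> str:
    url = f"s3://softwareheritage/content/{blob_id}"
    with s3_open(url, "rb", compression=".gz", transport_params={"client": s3}) as f:
        return f.read().decode("utf-8")
```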

Fortunately, the Python-Edu subset is small enough (~7 million files) for me to afford: downloading the entire set took approximately one hour.

I am publishing the complete py-edu dataset here for anyone who needs it.
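
A minimal sketch of loading it with the `datasets` library is shown below; the repo id is a placeholder (replace it with this dataset's actual id), and streaming is used so the ~13.5 GB train split is not materialized at once.

```python
# Sketch: stream the train split with the `datasets` library.
from datasets import load_dataset

DATASET_ID = "<this-dataset-repo-id>"  # placeholder, substitute the real repo id

ds = load_dataset(DATASET_ID, split="train", streaming=True)

for example in ds:
    # Each row carries the fields declared in the YAML header above.
    print(example["repo_name"], example["path"], example["length_bytes"])
    print(example["text"][:200])
    break
```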

If this release inadvertently causes any issues for the HuggingFaceTB team, please reach out to me and I will remove it immediately.