---
dataset_info:
  features:
    - name: blob_id
      dtype: string
    - name: repo_name
      dtype: string
    - name: path
      dtype: string
    - name: length_bytes
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
    - name: text
      dtype: string
    - name: download_success
      dtype: bool
  splits:
    - name: train
      num_bytes: 13499266964
      num_examples: 7678448
  download_size: 6086016638
  dataset_size: 13499266964
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---
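As a quick orientation, here is a minimal loading sketch using the 🤗 `datasets` library. The repo id `Leon-Leee/unofficial-pyedu` is an assumption based on this page's location; adjust it if the dataset lives elsewhere.

```python
# Sketch: stream rows from this dataset without downloading all ~6 GB up front.
# Assumption: the dataset is hosted at "Leon-Leee/unofficial-pyedu" on the Hub.
EXPECTED_COLUMNS = [
    "blob_id", "repo_name", "path", "length_bytes",
    "score", "int_score", "text", "download_success",
]  # the columns declared in the metadata block above

def preview(repo_id: str = "Leon-Leee/unofficial-pyedu", n: int = 3):
    """Yield the first n rows as dicts, using streaming mode."""
    from datasets import load_dataset  # imported lazily; requires `pip install datasets`
    ds = load_dataset(repo_id, split="train", streaming=True)
    for i, row in enumerate(ds):
        if i >= n:
            break
        yield {col: row.get(col) for col in EXPECTED_COLUMNS}
```

With `streaming=True`, rows are fetched shard by shard, so you can inspect a few examples before committing to the full ~6 GB download.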

## About This Dataset

The HuggingFaceTB team has released an impressive series of models, SmolLM (v1/v2) (paper: https://arxiv.org/abs/2502.02737).

According to their documentation, Stack-Edu served as the code corpus for pretraining, and they published https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus.

However, for some reason only the Python-Edu subset is accessible there, and it contains no content/text field.

The full dataset is stored on AWS S3; downloading it realistically requires an AWS EC2 instance, because AWS's rate limits make the transfer impractically slow from anywhere else.
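For reference, the text of each file can in principle be re-fetched yourself via the `blob_id` column. The sketch below assumes the blobs sit gzip-compressed in the public `softwareheritage` S3 bucket under `content/<blob_id>` and are readable anonymously over HTTPS; those paths are assumptions, not guaranteed by this dataset.

```python
import gzip
import urllib.request

# Assumption: blobs are stored gzip-compressed in the public
# "softwareheritage" S3 bucket under content/<blob_id>.
BUCKET_URL = "https://softwareheritage.s3.amazonaws.com"

def blob_url(blob_id: str) -> str:
    """Build the anonymous HTTPS URL for a blob."""
    return f"{BUCKET_URL}/content/{blob_id}"

def fetch_text(blob_id: str, timeout: float = 30.0) -> str:
    """Download one blob and return its decompressed text."""
    with urllib.request.urlopen(blob_url(blob_id), timeout=timeout) as resp:
        return gzip.decompress(resp.read()).decode("utf-8", errors="ignore")
```

Doing this for all ~7.7 million blobs one request at a time is exactly the slow path described above, which is why a bulk copy of the text is useful.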

Fortunately, the Python-Edu subset is small enough (~7.7 million files) for me to afford; downloading the entire set took approximately one hour.

I am publishing the complete Python-Edu dataset here, text included, for anyone who needs it.

If this release inadvertently causes any issues for the HuggingFaceTB team, please reach out to me and I will remove it immediately.