algo2217 committed (verified)
Commit 50246b3 · Parent(s): 75aa472

Upload folder using huggingface_hub

Files changed (3):
  1. README.md +13 -6
  2. data/train.parquet +3 -0
  3. dataset_info.json +4 -4
README.md CHANGED
@@ -5,7 +5,7 @@ license: mit
 multilinguality:
 - monolingual
 size_categories:
-- 1M<n<10M
+- 1K<n<10K
 source_datasets:
 - original
 task_categories:
@@ -14,13 +14,20 @@ task_ids:
 - language-modeling
 ---
 
-# DSIR Pile 1M Filtered (No GitHub or DM Math)
+# My_Downsampled_Dataset
 
-This dataset contains 1M tokens from the Pile dataset, filtered to exclude GitHub repositories and DM mathematics content.
+This dataset contains 1,000,000 examples from timaeus/dsir-pile-13m-filtered-no-github-or-dm_mathematics, downsampled for efficient processing.
 
 ## Dataset Description
 
-- **Size**: 1M tokens
-- **Filtering**: Excludes GitHub repositories and DM mathematics
+- **Size**: 1,000,000 examples
 - **Format**: Parquet files
-- **Splits**: Train, validation, test
+- **Source**: timaeus/dsir-pile-13m-filtered-no-github-or-dm_mathematics
+
+## Usage
+
+```python
+from datasets import load_dataset
+
+dataset = load_dataset("path/to/my_downsampled_dataset")
+```
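The new README says the dataset holds 1,000,000 examples downsampled from the 13M-example source, but the commit does not include the downsampling script. As a minimal sketch (hypothetical `downsample` helper operating on toy in-memory records, not the actual pipeline), seeded random downsampling could look like:

```python
import random

def downsample(examples, k, seed=42):
    """Return a reproducible random subset of `examples` of size `k`.

    Hypothetical sketch of the kind of downsampling the README
    describes; the real script behind this commit is not shown.
    """
    rng = random.Random(seed)
    return rng.sample(examples, k)

# Toy stand-in for the 13M-example source dataset.
source = [{"text": f"doc {i}"} for i in range(10_000)]
subset = downsample(source, 1_000)
```

Fixing the seed makes the subset reproducible, so re-running the script yields the same 1,000,000 examples.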
data/train.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a961703c734ac75cce9992c481e4e6584511e6ed4ccaf815c7781344e72716f
+size 981024755
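The file added here is not the parquet data itself but a Git LFS v1 pointer: three `key value` lines recording the spec version, the SHA-256 of the real blob, and its size in bytes. A small sketch (hypothetical `parse_lfs_pointer` helper) that splits such a pointer into its fields:

```python
# The pointer text committed above, reproduced verbatim.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:6a961703c734ac75cce9992c481e4e6584511e6ed4ccaf815c7781344e72716f
size 981024755
"""

def parse_lfs_pointer(text):
    """Parse a Git LFS v1 pointer file into a dict of key -> value."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(POINTER)
```

The `size` field (981024755 bytes, roughly 0.9 GiB) is what the repository actually stores in place of the parquet file.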
dataset_info.json CHANGED
@@ -1,11 +1,11 @@
 {
-  "dataset_name": "dsir-pile-1m-filtered-no-github-or-dm_mathematics",
+  "dataset_name": "my_downsampled_dataset",
   "dataset_type": "text",
   "splits": {
     "train": {
       "name": "train",
-      "num_bytes": 0,
-      "num_examples": 0
+      "num_bytes": 981024755,
+      "num_examples": 1000000
     }
   }
-}
+}
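After this commit, the metadata should be internally consistent: `num_bytes` matches the `size` recorded in the parquet LFS pointer, and `num_examples` matches the count in the README. A quick sanity check, using the new JSON content reproduced inline:

```python
import json

# The post-commit dataset_info.json, reproduced from the diff above.
NEW_DATASET_INFO = """
{
  "dataset_name": "my_downsampled_dataset",
  "dataset_type": "text",
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 981024755,
      "num_examples": 1000000
    }
  }
}
"""

info = json.loads(NEW_DATASET_INFO)
train = info["splits"]["train"]
# num_bytes should equal the size in the parquet LFS pointer,
# and num_examples the 1,000,000 examples claimed in the README.
assert train["num_bytes"] == 981024755
assert train["num_examples"] == 1_000_000
```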