---
dataset_info:
- config_name: default
features:
- name: id
dtype: string
- name: content
dtype: string
- name: score
dtype: int64
- name: poster
dtype: string
- name: date_utc
dtype: timestamp[ns]
- name: flair
dtype: string
- name: title
dtype: string
- name: permalink
dtype: string
- name: nsfw
dtype: bool
- name: updated
dtype: bool
- name: new
dtype: bool
splits:
- name: train
num_bytes: 50994948
num_examples: 98828
download_size: 31841070
dataset_size: 50994948
- config_name: year_2015
features:
- name: id
dtype: string
- name: content
dtype: string
- name: score
dtype: int64
- name: poster
dtype: string
- name: date_utc
dtype: timestamp[ns]
- name: flair
dtype: string
- name: title
dtype: string
- name: permalink
dtype: string
- name: nsfw
dtype: bool
- name: updated
dtype: bool
- name: new
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3229520
num_examples: 5774
download_size: 1995677
dataset_size: 3229520
- config_name: year_2016
features:
- name: id
dtype: string
- name: content
dtype: string
- name: score
dtype: int64
- name: poster
dtype: string
- name: date_utc
dtype: timestamp[ns]
- name: flair
dtype: string
- name: title
dtype: string
- name: permalink
dtype: string
- name: nsfw
dtype: bool
- name: updated
dtype: bool
- name: new
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 5298054
num_examples: 9701
download_size: 3351804
dataset_size: 5298054
- config_name: year_2017
features:
- name: id
dtype: string
- name: content
dtype: string
- name: score
dtype: int64
- name: poster
dtype: string
- name: date_utc
dtype: timestamp[ns]
- name: flair
dtype: string
- name: title
dtype: string
- name: permalink
dtype: string
- name: nsfw
dtype: bool
- name: updated
dtype: bool
- name: new
dtype: bool
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 6890884
num_examples: 12528
download_size: 4379140
dataset_size: 6890884
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- config_name: year_2015
data_files:
- split: train
path: year_2015/train-*
- config_name: year_2016
data_files:
- split: train
path: year_2016/train-*
- config_name: year_2017
data_files:
- split: train
path: year_2017/train-*
---
--- Generated Part of README Below ---
## Dataset Overview
The goal is to have an open dataset of [r/uwaterloo](https://www.reddit.com/r/uwaterloo/) submissions. I'm using PRAW and the Reddit API to download submissions.
Each API call returns at most 1000 submissions and search functionality is limited, so the collector runs hourly to pick up new submissions.
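Each yearly slice in the YAML metadata above is exposed as its own config. Below is a minimal loading sketch using the `datasets` library; the repository ID is an assumption for illustration and should be replaced with this card's actual repo ID.
```python
from datasets import load_dataset

# NOTE: the repo ID below is assumed for illustration; substitute the actual
# repository this card belongs to.
ds = load_dataset("alvanlii/reddit-uwaterloo", "default", split="train")
ds_2015 = load_dataset("alvanlii/reddit-uwaterloo", "year_2015", split="train")

print(ds.column_names)  # id, content, score, poster, date_utc, flair, title, permalink, nsfw, updated, new
print(len(ds_2015))     # 5774 rows according to the card metadata
```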
## Creation Details
This dataset was created by [alvanlii/dataset-creator-reddit-uwaterloo](https://huggingface.co/spaces/alvanlii/dataset-creator-reddit-uwaterloo)
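For context, a rough sketch of how a creator Space like this might pull new submissions with PRAW is shown below. The credential environment variables, user agent, and field mapping are assumptions, not the Space's actual code.
```python
import os
import praw

# Illustrative sketch only: credential env vars and the field mapping are
# assumptions; the creator Space's real implementation may differ.
reddit = praw.Reddit(
    client_id=os.environ["REDDIT_CLIENT_ID"],
    client_secret=os.environ["REDDIT_CLIENT_SECRET"],
    user_agent="dataset-creator-reddit-uwaterloo",
)

rows = []
# A listing returns at most ~1000 items, which is why the job runs hourly.
for submission in reddit.subreddit("uwaterloo").new(limit=1000):
    rows.append({
        "id": submission.id,
        "content": submission.selftext,
        "score": submission.score,
        "poster": str(submission.author),
        "date_utc": submission.created_utc,  # epoch seconds; converted to a timestamp downstream
        "flair": submission.link_flair_text,
        "title": submission.title,
        "permalink": submission.permalink,
        "nsfw": submission.over_18,
    })
```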
## Update Frequency
The dataset is updated hourly. The most recent update was `2024-08-29 14:00:00 UTC+0000`, which added **0 new rows**.
## Licensing
[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:
[License information]
## Opt-out
To opt out of this dataset, please make a pull request with your justification and add your ids to `filter_ids.json`; a sketch of how those ids might be applied follows the steps below.
1. Go to [filter_ids.json](https://huggingface.co/spaces/reddit-tools-HF/dataset-creator-reddit-bestofredditorupdates/blob/main/filter_ids.json)
2. Click Edit
3. Add your ids, 1 per row
4. Comment with your justification
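For reference, here is a hypothetical sketch of how the opted-out ids might be dropped when the dataset is rebuilt. The flat-list layout of `filter_ids.json` and the repo ID are assumptions, not the actual rebuild code.
```python
import json
from datasets import load_dataset

# Assumptions: filter_ids.json is a flat JSON list of submission ids, and the
# repo ID below is illustrative only.
with open("filter_ids.json") as f:
    filtered_ids = set(json.load(f))

ds = load_dataset("alvanlii/reddit-uwaterloo", "default", split="train")
ds = ds.filter(lambda row: row["id"] not in filtered_ids)
```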