Update README.md
README.md (CHANGED)
@@ -26,27 +26,36 @@ This dataset contains all posts submitted to StackOverflow before the 14th of Ju
 The dataset contains over 60 million posts, totaling ~40GB in size and ~65 billion characters of text.<br>
 The data is sourced from [Internet Archive StackExchange Data Dump](https://archive.org/download/stackexchange).

-##
+## Dataset Structure
+
+Each record corresponds to one post of a particular type.
+The original ordering from the data dump is not exactly preserved, due to parallelism in the script used to process it.
+The markdown content of each post is contained in the `Body` field. The license for a particular post is contained in the `ContentLicense` field.
+
+
+### Data Fields
 ```typescript
 {
     Id: long,
-    PostTypeId: long
-    AcceptedAnswerId: long | null,
-    ParentId: long | null,
+    PostTypeId: long, // 1=Question, 2=Answer, 3=Orphaned tag wiki, 4=Tag wiki excerpt, 5=Tag wiki, 6=Moderator nomination, 7=Wiki Placeholder, 8=Privilege Wiki
+    AcceptedAnswerId: long | null, // only present if PostTypeId=1
+    ParentId: long | null, // only present if PostTypeId=2
     Score: long,
     ViewCount: long | null,
     Body: string | null,
-    Title: string | null
+    Title: string | null,
     ContentLicense: string | null,
     FavoriteCount: long | null,
     CreationDate: string | null,
     LastActivityDate: string | null,
     LastEditDate: string | null,
     LastEditorUserId: long | null,
-    OwnerUserId: long | null
+    OwnerUserId: long | null,
     Tags: array<string> | null
 }
 ```
+Also see , as all fields
+have analogs in the original dump format.

 ## How to use?
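A note on the `Data Fields` block added above: the `long` / `array<string>` notation is schema shorthand rather than literal TypeScript. Purely as an illustration of the new field comments, the same structure can be written as a discriminated union on `PostTypeId`. The type names below are ours and not part of the dataset, and `long` is modeled as `number` (StackOverflow ids fit comfortably in a double; `bigint` would be the safer general choice).

```typescript
// Illustrative sketch only: type names and the number-vs-bigint choice are assumptions.
interface PostBase {
  Id: number;
  Score: number;
  ViewCount: number | null;
  Body: string | null;           // markdown content of the post
  Title: string | null;
  ContentLicense: string | null; // license for this particular post
  FavoriteCount: number | null;
  CreationDate: string | null;
  LastActivityDate: string | null;
  LastEditDate: string | null;
  LastEditorUserId: number | null;
  OwnerUserId: number | null;
  Tags: string[] | null;
}

// PostTypeId=1: a question may carry the id of its accepted answer.
interface Question extends PostBase {
  PostTypeId: 1;
  AcceptedAnswerId: number | null;
}

// PostTypeId=2: an answer points back to its question via ParentId.
interface Answer extends PostBase {
  PostTypeId: 2;
  ParentId: number | null;
}

// PostTypeIds 3-8: tag wikis, excerpts, moderator nominations, placeholders.
interface OtherPost extends PostBase {
  PostTypeId: 3 | 4 | 5 | 6 | 7 | 8;
}

type Post = Question | Answer | OtherPost;

// Narrowing on PostTypeId encodes the "only present if" comments from the diff.
function acceptedAnswerId(post: Post): number | null {
  return post.PostTypeId === 1 ? post.AcceptedAnswerId : null;
}
```

With this shape, pairing an answer with its question is a matter of matching `ParentId` against `Id`, and the "only present if PostTypeId=1/2" comments become checkable in the type system.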
@@ -142,9 +151,12 @@ The value `field` of a `TextNode` contains the free standing text without any fu
 - For each element of `element.select("th")`
   - `${element.text()} | `
 - After the loop `\n| ` is emitted
--
+- For each element of `element.select("th")`
+  - For each character of `th.text()`
+    - `-` is emitted
+  - After the loop over each character of the th, ` | ` is emitted
 - `\n` is emitted
-- For each element of `element.select("tr")`
+- For each element of `element.select("tr")` with more than one child of tag type `td`
   - `| ` is emitted
   - For each element of `element.select("td")`
     - `${td.text()} | ` is emitted
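The nested list above, together with the `th` separator steps this hunk adds, describes how the dump-processing script flattens HTML tables to markdown. The script itself is not part of this diff; its `element.select(...)` / `.text()` calls look like a Jsoup-style API. Purely as a sketch, the quoted rules restated in TypeScript against the standard DOM API might look like the following. The leading `| ` of the header row and the newline after each body row are assumptions on our part, since those steps fall outside the quoted lines.

```typescript
// Sketch only: the real processing script is not shown in this diff.
// Jsoup-style `element.select(...)` / `.text()` are approximated here with
// querySelectorAll / textContent.
function tableToMarkdown(table: Element): string {
  let out = "| "; // assumed opening of the header row (outside the quoted hunk)

  // Header row: `${th.text()} | ` for each <th>.
  const headers = Array.from(table.querySelectorAll("th"));
  for (const th of headers) {
    out += `${(th.textContent ?? "").trim()} | `;
  }
  // After the header loop, "\n| " is emitted.
  out += "\n| ";

  // Separator row: one "-" per character of th.text(), then " | " per header.
  for (const th of headers) {
    const text = (th.textContent ?? "").trim();
    for (let i = 0; i < text.length; i++) {
      out += "-";
    }
    out += " | ";
  }
  out += "\n";

  // Body rows: only <tr> elements with more than one <td> child are emitted.
  for (const tr of Array.from(table.querySelectorAll("tr"))) {
    const cells = Array.from(tr.querySelectorAll(":scope > td"));
    if (cells.length <= 1) continue;
    out += "| ";
    for (const td of cells) {
      out += `${(td.textContent ?? "").trim()} | `;
    }
    out += "\n"; // assumed end-of-row newline (outside the quoted hunk)
  }

  return out;
}
```

The row filter follows the wording "more than one child of tag type `td`" literally, so single-cell rows are skipped; header rows containing only `th` cells are skipped for the same reason.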