Update README.md
README.md CHANGED
@@ -11,16 +11,16 @@ size_categories:
- 100K<n<1M
---

Here are _mostly_ **original/raw files** for some of the forums I scraped in the past (and some newly scraped ones), **repacked as HTML strings + some metadata on a one-row-per-thread basis** instead of a _one-row-per-message basis_, which should make them more convenient to handle. Unlike [the other archive](https://huggingface.co/datasets/lemonilia/Roleplay-Forums_2023-04), they shouldn't have issues with spaces between adjacent HTML tags, an issue that was introduced by mistake in an intermediate processing step where single messages were extracted from the pages. Unfortunately, I no longer have the original files for most of the forums scraped in 2023; those would need to be scraped again.

## Scraping details

Most forums were scraped page by page using the Firefox extension [Web Scraper](https://addons.mozilla.org/en-US/firefox/addon/web-scraper/), generally picking only the top-level thread message container rather than the entire page. The scraper was configured to carry out the following loop (a rough Python equivalent is sketched after the list):

1. Scan forum pages one by one, retrieve thread links and visit the links one by one;
2. Retrieve the thread page HTML, look for the "next page" link, navigate to that;
3. Repeat step 2 in a cyclic fashion.

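The sketch below only illustrates the crawl logic of the three steps above; it is not what was actually used (the data was collected with the Web Scraper extension), and the URL and CSS selectors are placeholders that would differ for every forum.

```python
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# Placeholder board URL; every forum needs its own URL and selectors.
FORUM_INDEX = "https://forum.example.com/some-board"


def scrape_thread(thread_url: str) -> list[str]:
    """Steps 2-3: fetch each page of a thread, keep only the top-level
    message container, and follow the "next page" link until there is none."""
    pages = []
    url = thread_url
    while url:
        soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
        container = soup.select_one("div.thread-messages")  # forum-specific selector
        pages.append(str(container) if container else "")
        next_link = soup.select_one("a.next-page")           # forum-specific selector
        url = urljoin(url, next_link["href"]) if next_link else None
    return pages


def scrape_board(index_url: str) -> dict[str, list[str]]:
    """Step 1: collect thread links from a board index page, then visit each one."""
    soup = BeautifulSoup(requests.get(index_url, timeout=30).text, "html.parser")
    thread_urls = [urljoin(index_url, a["href"]) for a in soup.select("a.thread-link")]
    return {url: scrape_thread(url) for url in thread_urls}
```
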
For the process to work reliably, it was important not to scrape too many threads per run and to use Firefox rather than Chrome; otherwise the scrape would easily fail. Newer versions of Web Scraper solved some reliability issues with Firefox and made exporting the data to `.csv` format much quicker.

Using Python (pandas) on the exported data, pages from the same threads were grouped together and concatenated with the string `\n<!-- [NEW PAGE STARTS BELOW] -->\n`, without any further processing, obtaining one row of data per thread. The records were then saved into Parquet files (`compression='zstd', compression_level=7, row_group_size=20`).
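
A minimal sketch of that post-processing step, assuming the CSV export has one row per scraped page; the file names and the `thread_url`/`page_html` column names are placeholders, not the actual ones used:

```python
import pandas as pd

PAGE_SEPARATOR = "\n<!-- [NEW PAGE STARTS BELOW] -->\n"

# Load the Web Scraper CSV export; the real column names depend on the sitemap.
pages = pd.read_csv("forum_export.csv")

# One row per thread: join the HTML of each thread's pages in export order,
# inserting the page separator between them.
threads = (
    pages.groupby("thread_url", as_index=False, sort=False)
         .agg(html=("page_html", PAGE_SEPARATOR.join))
)

# Write the Parquet file with the compression/row-group settings listed above
# (extra keyword arguments are forwarded to the pyarrow engine).
threads.to_parquet(
    "forum_threads.parquet",
    compression="zstd",
    compression_level=7,
    row_group_size=20,
)
```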