Tasks: Text Generation
Formats: parquet
Sub-tasks: language-modeling
Languages: Danish
Size: 10M - 100M
Update wiki, wikibooks, wikisource
#87
by kris927b - opened
[v1.2.10] - 2025-08-18
Changed
- Updated the wiki, wikibooks, wikisource datasets.
Added
- Added `create.py` for wiki, wikibooks, wikisource.
kris927b changed pull request status to open
Comments:
- Should we add a lock file based on the script dependencies to ensure that we can reproduce it in the future? (would love to set up CI to just automatically rerun these in the future)
- Do we prefer it as text or as markdown? (It seems like just text strips a lot of the desired formatting.)
- Can I ask you to fix my mistake in the average document length? (It seems like it only needs to have 2 decimals.)
- What do we do with links + images - is that something that we want to keep?
- There are a few places where you write to stdout (or use print) instead of logging. Any reason for that? (For the other scripts, I have saved the logs, which have been quite useful for debugging later on.)
- Anything wrong with the `add_token` from dynaword since you implement your own? (It seems like yours is faster - should we just update the other one?)
- I would love some more documentation on the processing (why did we choose `wtf_wikipedia`, etc.).
- Should we add some sort of "last updated" date to the readme? (Should be possible to create from the git info during `update_descriptive_stats.py`; we can do this in a separate issue.)
- I was considering renaming wiki to wikipedia; this might be a good time to do it?
Size changes:
- wikisource: 5.34 > 6.28 (+17.6%)
- wikibooks: 6.24 > 7.36 (+17.9%)
- wikipedia: 122.0 > 173.33 (+42.1%)
Those are pretty good stats for an update :)
Totally, it is of course only 0.05B, but it is great to have a way to keep the dataset up to date.
@robvanderg, I am wondering if this solved some of the issues you had with the links? In general, it might be nice to align some of the processing steps between this one and wikicomments (I am not at all sure which approach gives the best outcome - would be lovely to test this in the future).
Thanks for the feedback. I just made some alterations to the code:
- Should we add a lock file based on the script dependencies to ensure that we can reproduce it in the future? (would love to set up CI to just automatically rerun these in the future)
  - I think this is a good idea. We could, in a separate PR, just add version numbers to all `create.py` scripts. That should cover it? (Sketched below.)
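One way such pinning could look (a minimal sketch, not the actual dynaword setup) is PEP 723 inline script metadata at the top of each `create.py`, which tools like `uv run` resolve; the package list here is hypothetical:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "datasets==3.0.1",  # hypothetical pins: record the versions the script was last run with
#     "pandas==2.2.2",
# ]
# ///
"""create.py for wikipedia (illustrative header only; the build logic would follow)."""
```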
- Do we prefer it as text or as markdown? (It seems like just text strips a lot of the desired formatting.)
  - Just text removes formatting, but I just looked into what `wtf_wikipedia` is able to do with MD, and I don't think it will be very good. But I am open to other suggestions.
- Can I ask you to fix my mistake in the average document length? (It seems like it only needs to have 2 decimals.)
  - Fixed.
- What do we do with links + images - is that something that we want to keep?
  - Right now we don't use them for anything. But here at ALX we are looking into using the links to expand the articles into longer documents that can be used for extending context windows when training. But that will be a separate dataset/project.
- There are a few places where you write to stdout (or use print) instead of logging. Any reason for that? (For the other scripts, I have saved the logs, which have been quite useful for debugging later on.)
  - Changed to logging (sketched below).
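A minimal sketch of the print-to-logging swap, assuming a module-level logger and a per-script log file (the file name is hypothetical):

```python
import logging

logging.basicConfig(
    filename="wikipedia_create.log",  # hypothetical path; keeping the file makes later debugging possible
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)

n_articles = 1234  # placeholder count
logger.info("Parsed %d articles", n_articles)  # instead of print(f"Parsed {n_articles} articles")
```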
- Anything wrong with the `add_token` from dynaword since you implement your own? (It seems like yours is faster - should we just update the other one?)
  - Updated the one within dynaword.
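The dynaword helper itself is not shown in this thread; as a purely hypothetical illustration, the usual reason one token counter is faster than another is batch encoding, e.g. with a Hugging Face tokenizer:

```python
from transformers import AutoTokenizer

# Hypothetical tokenizer choice; any Hugging Face tokenizer behaves the same way here.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-multilingual-cased")

def count_tokens(texts: list[str]) -> int:
    # Encoding the whole batch at once is typically much faster than
    # looping over documents and encoding them one at a time.
    encodings = tokenizer(texts, add_special_tokens=False)["input_ids"]
    return sum(len(ids) for ids in encodings)

print(count_tokens(["Hej verden", "Danmark er et land i Skandinavien."]))
```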
- I would love some more documentation on the processing (why did we choose `wtf_wikipedia`, etc.).
  - I chose it because, out of the parsers I tested, it was empirically the best one. I tested mwparserfromhell, mediawiki_dump, wikiextractor, and wtf_wikipedia. The others still produced some artifacts from the parsing of wikicode.
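To illustrate the kind of artifact check such a comparison involves (using mwparserfromhell here since it has a Python API; this is not the comparison harness from the PR):

```python
import mwparserfromhell

# Parse a snippet of wikicode and strip the markup; leftover braces, brackets,
# or template fragments in the output are the artifacts compared across parsers.
wikicode = "{{Infobox land|navn=Danmark}} '''Danmark''' er et [[land]] i [[Skandinavien]]."
text = mwparserfromhell.parse(wikicode).strip_code().strip()
print(text)  # expected: "Danmark er et land i Skandinavien."
```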
- Should we add some sort of "last updated" date to the readme? (Should be possible to create from the git info during `update_descriptive_stats.py`; we can do this in a separate issue.)
  - Could be cool. But yeah, in a separate issue.
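A hedged sketch of how `update_descriptive_stats.py` could derive such a date from git history (the dataset path is hypothetical):

```python
import subprocess

def last_updated(path: str) -> str:
    """Return the committer date (YYYY-MM-DD) of the last commit touching `path`."""
    result = subprocess.run(
        ["git", "log", "-1", "--format=%cs", "--", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(last_updated("data/wikipedia"))  # hypothetical dataset directory
```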
- I was considering renaming wiki to wikipedia; this might be a good time to do it?
  - Renamed.
kris927b changed pull request status to merged