Danish Foundation Models org

Description: Adding the Danish documents from NCC (https://huggingface.co/datasets/NbAiLab/NCC) to dynaword.

Datasets should be separated by source, e.g. NCC -> ncc_newspaper, ncc_parliament, ...

Filtering:

Suspiciously long documents should also be examined.

balsab changed pull request status to open
Danish Foundation Models org

Thanks for the PR. Here are a few improvements:

  1. Please add a description to the PR
  2. Here are some suggested code improvements

Simplify here:

-    ## load all data first to get splits, then load and filter by split
-    data = load_dataset("NbAiLab/NCC", streaming=True)
-    data_splits=list(reversed(data.keys()))
-
-     for current_split in data_splits:
-        data = load_dataset("NbAiLab/NCC", streaming=True, split=current_split)
-        data_iter = iter(data)
+    data = load_dataset("NbAiLab/NCC", streaming=True)
+
+     for split in data:
+        data_iter = iter(data[split])
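For reference, a minimal self-contained version of the suggested loop could look like this (just a sketch; it only peeks at the first document of each split to show the available columns):

from datasets import load_dataset

# Stream NCC; iterating the resulting IterableDatasetDict yields the split names.
data = load_dataset("NbAiLab/NCC", streaming=True)

for split in data:
    first_doc = next(iter(data[split]))
    print(split, sorted(first_doc.keys()))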

No need to do a while True loop here:

        # filtering and formatting
        while True:
            try:

You can just use a datasets map function.

It is unclear what is going on here (function naming, unnecessary class). I would refactor:

meta_data_filtering = doc_filter.first_layer_filter(current_text)
# to
streaming_dataset = streaming_dataset.map(add_fasttext_language, num_proc=4) # you don't need this in this case, but this is the idea
streaming_dataset = streaming_dataset.filter(language_filter)
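For context, add_fasttext_language and language_filter could look roughly like this. This is only a sketch: the lid.176.bin language-ID model, the "text" column name, and the 0.75 confidence threshold are assumptions, not anything from the PR.

import fasttext

# fastText language-identification model (lid.176.bin has to be downloaded separately).
lid_model = fasttext.load_model("lid.176.bin")

def add_fasttext_language(example: dict) -> dict:
    # fastText labels look like "__label__da"; newlines must be stripped before predict().
    labels, scores = lid_model.predict(example["text"].replace("\n", " "))
    example["language"] = labels[0].removeprefix("__label__")
    example["language_confidence"] = float(scores[0])
    return example

def language_filter(example: dict) -> bool:
    # Keep only documents confidently classified as Danish (threshold is illustrative).
    return example["language"] == "da" and example["language_confidence"] >= 0.75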
  3. I would also like some information on the filtering after the initial language filtering:
  • number of tokens
  • number of docs
  • % removed at each step

So I would probably refactor to:

streaming_dataset = streaming_dataset.filter(language_filter)

# convert to non-streaming
# convert to dynaword format
# filter one at a time
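Once the data is non-streaming, the per-step numbers are easy to collect with a small helper like this (a sketch; the "text" column name and whitespace token counts are assumptions):

import logging

def log_filter_step(dataset, predicate, step_name: str):
    # Apply one filter step and log how many documents and tokens it removed.
    docs_before = len(dataset)
    tokens_before = sum(len(text.split()) for text in dataset["text"])
    dataset = dataset.filter(predicate)
    docs_after = len(dataset)
    tokens_after = sum(len(text.split()) for text in dataset["text"])
    logging.info(
        "%s: %d -> %d docs (%.1f%% removed), %d -> %d tokens (%.1f%% removed)",
        step_name,
        docs_before, docs_after, 100 * (1 - docs_after / docs_before),
        tokens_before, tokens_after, 100 * (1 - tokens_after / tokens_before),
    )
    return dataset

dataset = log_filter_step(dataset, language_filter, "language filter")  # and so on for each step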

I suspect that the stopword filter might be too aggressive. Where did you get the stopword list from?

  4. Reordering the dataset into multiple datasets

I also think I would split up the corpora into:
["ncc-newspapers", "ncc-parliament", "ncc-publicreport", ...

That means that we will get a different dataset for each source (right now you just use "ncc"). We do not want one dataset to mix multiple licenses. This also means that you need multiple datasheets.

  5. In the figure we seem to have a few REALLY long documents. I would examine some of these.

  6. Language filtering per source

I suspect that the language labeling in some of these is wrong, so it could be nice to check whether Danish makes up a significant proportion of each split. I imagine that some are only Norwegian with a few misclassifications.

You could do this using:

samples_pr_source: dict = ...  # you can define this using a defaultdict

def language_filter_with_desc_stats(example):
    source = ...
    language = ...
    samples_pr_source[source][language] += 1
    return language == "da"  # keep only documents classified as Danish

streaming_dataset = streaming_dataset.filter(language_filter_with_desc_stats, num_proc=num_proc)

# save + log desc stats
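For the counter, a nested defaultdict is enough (a sketch; note that with num_proc > 1 each worker process keeps its own counts, so it is simplest to collect these statistics single-process):

from collections import defaultdict
import logging

# source -> language -> number of documents seen
samples_pr_source: dict = defaultdict(lambda: defaultdict(int))

def log_samples_pr_source() -> None:
    # Log the per-source language distribution once the filtering has run.
    for source, languages in samples_pr_source.items():
        logging.info("%s: %s", source, dict(languages))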
  7. I would add a log; see danske-taler for an example.

I would also do some quality checking on duplicates.
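For the duplicate check, hashing the raw text already gives a first impression (a sketch; exact matches only, assuming a "text" column on the non-streaming dataset):

import hashlib
from collections import Counter

text_hashes = Counter(
    hashlib.md5(text.encode("utf-8")).hexdigest() for text in dataset["text"]
)
n_duplicates = sum(count - 1 for count in text_hashes.values() if count > 1)
print(f"{n_duplicates} exact duplicate documents out of {len(dataset)}")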

Danish Foundation Models org

Looks good!

  • I would change the name to be NCC Newspapers (ncc_newspapers).

  • The overview table where you have the question mark is automatically generated, so you don't have to fill it out manually.

  • Code-wise, there is no reason to have comments like # main or # quality check.

  • The short description looks a bit odd:

Danish language subset of NCC
Source: Newspaper articles

I would instead write:

Danish newspapers extracted from the Norwegian Colossal Corpus, derived from OCR.

  • Generally remove backslashes from markdown
  • Remove the filtering log from the dataset readme - I would instead add a dataset filtering section (which is the list you have, but written out with the numbers from the log)
  • I would pass a spell checker over the readme (e.g. "availabel"); it should be a quick fix
  • You say "Document is marked as Danish", but that hides that it is done by fastText. I would instead just write "documents classified as Danish with a threshold of .." (basically combining it with the next point)
Danish Foundation Models org

Alright, just the final things before merging:

There are still some issues in the datasheets, e.g. phrasing like:

1060 long texts (>~1e5 tokens) were found.

leads to the obvious question: why is that important? How was it checked? The filtering section in particular could use a rewrite to ensure that it is easy to read and understand. You could e.g. consider doing it as a table.
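As a purely illustrative layout (all numbers are placeholders to be filled in from the filtering log), the filtering section could look something like:

| Filtering step | Documents | Tokens | % removed |
| --- | --- | --- | --- |
| Raw Danish subset | … | … | – |
| Language filter (fastText, threshold …) | … | … | … |
| Stopword filter | … | … | … |
| Long-document check | … | … | … |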

You also still have to address this point (for the newspaper dataset):

Remove the filtering log from the dataset readme - I would instead add a dataset filtering section (which is the list you have, but written out with the numbers from the log)

Can you also add the checklist from the contributor guidelines and fill it out?

Danish Foundation Models org

Actually, I will just merge this with my changes and do the last fixes.

Danish Foundation Models org

Merged in the latest merge on main.

KennethEnevoldsen changed pull request status to closed
