gardari committed on
Commit 7c91bbf · verified · 1 Parent(s): e6bbb3c

Update README.md

Files changed (1)
  1. README.md +26 -3
README.md CHANGED
@@ -1,3 +1,26 @@
- ---
- license: unknown
- ---
+ ---
+ license: unknown
+ language:
+ - is
+ pretty_name: IC3-v2
+ size_categories:
+ - 1B<n<10B
+ ---
+
+ # Dataset Card for IC3-v2
+
+ <!-- Provide a quick summary of the dataset. -->
+ The Icelandic Clean Crawled Corpus v2 (IC3-v2) is a collection of quality-filtered plaintext documents in Icelandic extracted from scraped websites with the `.is` top-level domain (TLD) in Common Crawl dumps between the years 2013 and 2023.
+ The corpus contains about 1.3 billion words across almost 4 million documents.
+
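+ A minimal sketch of how the corpus might be loaded with the `datasets` library; the repository id, split name, and streaming setup below are assumptions for illustration, not confirmed by this card:
+
+ ```python
+ # Sketch only: repository id, split name, and field names are assumptions.
+ from datasets import load_dataset
+
+ # Stream the corpus instead of downloading it fully up front.
+ ds = load_dataset("gardari/IC3-v2", split="train", streaming=True)  # hypothetical repo id
+
+ for doc in ds:
+     # Each record is expected to hold the extracted plaintext plus
+     # document-level metadata such as `title`, `author`, and `tags`.
+     print(doc)
+     break
+ ```
+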
+ ## Dataset Details
+ We extract all WARC records matching the `.is` TLD from all Common Crawl dumps available as of the end of 2023. Using a manually curated blacklist of domain names, we remove records from websites with gambling, pornography, and other illegal or harmful content.
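+
+ A rough sketch of this step, assuming `warcio` for reading the WARC files; the blacklist file name and the exact URL-matching rules are illustrative assumptions:
+
+ ```python
+ # Sketch: filter Common Crawl WARC response records down to the .is TLD.
+ from urllib.parse import urlparse
+
+ from warcio.archiveiterator import ArchiveIterator
+
+ with open("domain_blacklist.txt", encoding="utf-8") as f:  # hypothetical blacklist file
+     blacklist = {line.strip() for line in f if line.strip()}
+
+ def iter_is_records(warc_path):
+     """Yield (url, raw_html_bytes) for .is records not on the blacklist."""
+     with open(warc_path, "rb") as stream:
+         for record in ArchiveIterator(stream):
+             if record.rec_type != "response":
+                 continue
+             url = record.rec_headers.get_header("WARC-Target-URI") or ""
+             host = urlparse(url).hostname or ""
+             if not host.endswith(".is") or host in blacklist:
+                 continue
+             yield url, record.content_stream().read()
+ ```
+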
+ We extract the plaintext from the corresponding raw HTML using `trafilatura` (with `jusText` as a fallback). During extraction, `trafilatura` uses heuristics to provide fields such as `title`, `author`, and `tags`, which we include as document-level metadata.
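+
+ A minimal sketch of this step, assuming the `trafilatura` and `justext` Python packages; the options shown are illustrative rather than the exact IC3-v2 configuration, and the availability of an Icelandic jusText stoplist is an assumption:
+
+ ```python
+ # Sketch: plaintext extraction with trafilatura, falling back to jusText.
+ import justext
+ import trafilatura
+
+ def extract_text(html: bytes):
+     text = trafilatura.extract(html)  # returns None when extraction fails
+     if text:
+         return text
+     # Fallback: keep only the non-boilerplate paragraphs found by jusText.
+     paragraphs = justext.justext(html, justext.get_stoplist("Icelandic"))
+     return "\n".join(p.text for p in paragraphs if not p.is_boilerplate) or None
+
+ def extract_doc_metadata(html: bytes):
+     # trafilatura's heuristic metadata (title, author, tags, ...).
+     meta = trafilatura.extract_metadata(html)
+     if meta is None:
+         return {}
+     return {"title": meta.title, "author": meta.author, "tags": meta.tags}
+ ```
+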
+ We then apply various hand-crafted quality filters, similar to the Gopher rules and the FineWeb filters, to the extracted output in an attempt to exclude low-quality documents such as SEO product pages, documents with a high number of repeated sentences, etc.
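+
+ The exact filter set and thresholds are not documented in this card; the sketch below only illustrates the flavour of such heuristics, with made-up values:
+
+ ```python
+ # Sketch: Gopher/FineWeb-style heuristics with purely illustrative thresholds.
+ def passes_quality_filters(text: str) -> bool:
+     words = text.split()
+     if not (50 <= len(words) <= 100_000):             # document length bounds
+         return False
+     if sum(len(w) for w in words) / len(words) > 12:  # mean word length
+         return False
+     lines = [l.strip() for l in text.splitlines() if l.strip()]
+     if lines and 1 - len(set(lines)) / len(lines) > 0.3:  # repeated-line ratio
+         return False
+     # Require that most tokens are alphabetic (rejects listings, SEO pages, ...).
+     return sum(w.isalpha() for w in words) / len(words) >= 0.7
+ ```
+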
+ On the remaining high-quality documents, we then run FAIR's `fasttext` language identification model and keep only documents with a high proportion of Icelandic text.
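+
+ A sketch of the language-identification step, assuming the pre-trained `lid.176.bin` fastText model and an illustrative confidence threshold; the card's notion of "proportion of Icelandic text" may well be computed differently (e.g. per line):
+
+ ```python
+ # Sketch: keep documents that fastText's LID model labels as Icelandic.
+ import fasttext
+
+ lid_model = fasttext.load_model("lid.176.bin")  # pre-trained language-ID model
+
+ def is_mostly_icelandic(text: str, threshold: float = 0.8) -> bool:
+     # fastText's predict() expects a single line of text.
+     labels, probs = lid_model.predict(text.replace("\n", " "), k=1)
+     return labels[0] == "__label__is" and probs[0] >= threshold
+ ```
+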
+ Finally, we deduplicate the corpus. We do this both by running a sliding window of three sentences across each document and removing any span that was previously seen in an earlier document, and by performing document-wise exact string-matching deduplication across the corpus.
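+
+ A simplified sketch of the deduplication step; the sentence splitting, normalisation, and processing order below are assumptions made for illustration:
+
+ ```python
+ # Sketch: drop any 3-sentence span already seen in an earlier document,
+ # then drop exact-duplicate documents.
+ import hashlib
+ import re
+
+ seen_spans: set[str] = set()
+ seen_docs: set[str] = set()
+
+ def _hash(s: str) -> str:
+     return hashlib.sha1(s.encode("utf-8")).hexdigest()
+
+ def dedup_document(text: str):
+     """Return the deduplicated text, or None if the document is a duplicate."""
+     sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
+     kept = [True] * len(sentences)
+     for i in range(len(sentences) - 2):
+         key = _hash(" ".join(sentences[i:i + 3]).lower())
+         if key in seen_spans:
+             kept[i] = kept[i + 1] = kept[i + 2] = False  # repeated span
+         else:
+             seen_spans.add(key)
+     deduped = " ".join(s for s, k in zip(sentences, kept) if k)
+     doc_key = _hash(deduped)
+     if doc_key in seen_docs:  # exact document-level duplicate
+         return None
+     seen_docs.add(doc_key)
+     return deduped
+ ```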
+
+ Please note that even though the documents have been URL-filtered for harmful content on a best-effort basis, some harmful content may still remain.