Clarification on data deletion rules
I'm trying to understand how certain data is removed.
For example, for a document in Fra_Latn_removed:
- id: urn:uuid:2867ede1-5c90-424c-8b1a-1b5695dc0040
- url: http://archives.cg58.fr/?id=partager
- filter_reason: duplicated_6_n_grams
Where are the rules for such deletions defined?
Is there a specific part of the codebase (e.g., in Python) where filtering criteria such as duplicated_6_n_grams are implemented?
Any pointers to documentation or relevant source code would be appreciated!
Okay, I figured it out. The removal rules are implemented in the filter steps of FineWeb 2's Python pipeline (via classes like GopherRepetitionFilter) and are configured through YAML files in the repository's configs/ directory (e.g., configs/fra_Latn.yml). When a document (here, a French one) contains too many duplicated 6-grams, the filter defined in the French configuration flags it with a reason (e.g., "duplicated_6_n_grams"), and the document is stored in a separate split whose name is suffixed with _removed.