Questions on Temporal Deduplication and Proposed Strategy in Dataset Construction

#52 opened by AshleyLL

I hope you're all doing well. I'm reaching out to better understand some aspects of the Dolma dataset, especially regarding deduplication strategies. Any insights from the community would be greatly appreciated.

  1. Temporal Deduplication
    Does Dolma perform deduplication across different time points? For example, if a webpage with the same URL (or near-identical content) appears in multiple years (e.g., 2013, 2014, 2015), is it deduplicated? If so:

  • Which version is retained by default: the earliest, the latest, or the highest-quality snapshot?
  • Are there specific tools or metrics used to evaluate content equivalence over time?

  2. Proposed Deduplication Strategy
    We’re considering implementing a deduplication strategy based on the following rules:
    (1) Default Rule: Retain only the first occurrence of a webpage.
    (2) Exception Rule: Keep a subsequent crawl if:

  • The content has undergone significant modification (e.g., expanded depth/breadth), or
  • The new version is of higher quality.
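
To make the two rules concrete, here is a minimal Python sketch of the selection logic we are imagining. Everything in it is hypothetical: the `Snapshot` fields, the `quality` score, and both thresholds are placeholders for whatever signals are actually available, not anything taken from Dolma itself.

```python
from dataclasses import dataclass


@dataclass
class Snapshot:
    url: str
    crawl_date: str   # ISO date of the crawl, e.g. "2013-06-01"
    text: str
    quality: float    # hypothetical quality score in [0, 1]


def is_significant_change(old: Snapshot, new: Snapshot, min_growth: float = 0.5) -> bool:
    # Placeholder predicate: treat a snapshot as "significantly modified"
    # if its text grew by more than min_growth relative to the kept version.
    return len(new.text) > (1.0 + min_growth) * len(old.text)


def deduplicate(snapshots: list[Snapshot], quality_margin: float = 0.1) -> dict[str, Snapshot]:
    # Default rule: keep the first crawl of each URL.
    # Exception rules: replace it if a later crawl is significantly
    # modified or clearly higher quality.
    kept: dict[str, Snapshot] = {}
    for snap in sorted(snapshots, key=lambda s: s.crawl_date):
        prev = kept.get(snap.url)
        if prev is None:
            kept[snap.url] = snap   # first occurrence wins by default
        elif is_significant_change(prev, snap):
            kept[snap.url] = snap   # exception: substantial expansion
        elif snap.quality > prev.quality + quality_margin:
            kept[snap.url] = snap   # exception: clearly higher quality
    return kept
```

At web scale this per-URL grouping would presumably run as a sharded map-reduce-style job rather than in memory, but the decision logic would be the same.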

Could the community share thoughts on this approach?
Does this align with best practices?
Are there known pitfalls or alternative strategies we should consider?
Has something like this already been implemented in Dolma, or is it under development?
If not, are there recommended tools or toolchains for implementing such a strategy at scale?
How can one define "significant content difference" or "higher quality"? For example (a sketch of one possible heuristic follows this list):

  • Semantic similarity thresholds?
  • Content length or structural changes?
  • Quality heuristics (e.g., readability, domain authority)?
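
As one concrete starting point, "significant content difference" could be approximated with word-shingle Jaccard similarity plus a length-growth check. The sketch below is illustrative only; the thresholds are arbitrary, and at Dolma's scale one would presumably swap the exact shingle sets for MinHash/LSH signatures.

```python
def shingles(text: str, n: int = 5) -> set[tuple[str, ...]]:
    # Word n-gram shingles, the usual unit for near-duplicate detection.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard(a: set, b: set) -> float:
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def significantly_different(old_text: str, new_text: str,
                            sim_threshold: float = 0.8,
                            growth_threshold: float = 0.5) -> bool:
    # Flag a new crawl as "significantly modified" if its shingle overlap
    # with the kept version drops below sim_threshold, or if its length
    # grows by more than growth_threshold. Both thresholds are made up here.
    sim = jaccard(shingles(old_text), shingles(new_text))
    grew = len(new_text) > (1.0 + growth_threshold) * len(old_text)
    return sim < sim_threshold or grew
```

"Higher quality" is harder to pin down; perplexity-based filters, readability scores, or a trained quality classifier are the usual candidates, each with its own biases.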

Thank you all very much for your time and contributions to this project. Looking forward to hearing your thoughts!
