How aggressive was the filtering?
Filtering out code, math, STEM, and medical content can obviously be useful when making domain-specific models; however, it makes me wonder how aggressively the full dataset was filtered for quality and "desirable" domains.
The primary reason I ask is that ever since Llama 3 70B, with its size-appropriate SimpleQA score of ~20, OS models started GROSSLY overfitting to a handful of domains, specifically the ones you singled out (coding, math, and STEM). As a consequence, their performance tanked across a wide variety of tasks (e.g. poems, humor...) and extremely popular domains of knowledge (e.g. movies, music, games...).
A perfect example is Qwen3 235B, which despite its massive size has a SimpleQA score of only ~11. And in my testing it's so grossly overfit to math, coding, and STEM that it's less than useless to the >95% of the general English-speaking population who aren't science-obsessed coders.
So I guess my question is: how much lowbrow popular knowledge (e.g. pop culture) was retained from sources like Common Crawl?
Because it would be nice to see some post-Llama-3 OS models that aren't overfit to code/math/STEM: models with massive sizes and ~90 English MMLU scores that also score well above ~10 on English SimpleQA (currently, most of those few points come from domains of knowledge that overlap with MMLU). And it's not just the lack of knowledge and nuance across humanity's most popular domains: said overfit models reliably can't write even primitive poems, produce or detect even shallow humor, and so on.
Anyways, it would be nice if the OS community could once again start democratically training on humanity's most popular and valued information in order to produce balanced, general-purpose AI models, instead of trying to maximize scores on math, coding, and STEM benchmarks. Regardless, thanks for creating such a massive 24 trillion token corpus.