This repository contains a fastText filter for selecting pretraining data, targeting the LAMBADA task, as described in the paper *Improving Pretraining Data Using Perplexity Correlations*. The filter selects high-quality pretraining documents by exploiting correlations between LLM perplexity and downstream benchmark performance.

Code: https://github.com/TristanThrush/perplexity-correlations
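As a rough sketch of how a fastText-style quality filter is typically applied: a real model would be loaded with `fasttext.load_model(...)` and queried via its `predict` method, which returns a tuple of label names (prefixed with `__label__`) and probabilities. The stub `predict` below is a hypothetical stand-in that mimics that return shape, and the label names (`__label__include` / `__label__exclude`) are assumptions, not the model's actual labels.

```python
# Sketch of filtering pretraining documents with a fastText-style classifier.
# A real model would be loaded with fasttext.load_model("model.bin"); here a
# hypothetical stub mimics fastText's predict() return shape:
# (tuple of "__label__..." names, tuple of probabilities).

def predict(text, k=2):
    # Hypothetical stand-in scorer: pretends longer documents are more
    # likely to be high quality. A real model scores actual text features.
    p_include = min(0.99, len(text) / 100.0)
    return (("__label__include", "__label__exclude"), (p_include, 1.0 - p_include))

def filter_documents(docs, threshold=0.5):
    """Keep documents whose predicted 'include' probability exceeds threshold."""
    kept = []
    for doc in docs:
        # fastText expects single-line input, so strip newlines before scoring.
        labels, probs = predict(doc.replace("\n", " "), k=2)
        score = dict(zip(labels, probs)).get("__label__include", 0.0)
        if score > threshold:
            kept.append(doc)
    return kept

docs = ["tiny", "a much longer, plausibly higher-quality training document " * 3]
print(len(filter_documents(docs)))  # only the longer document passes
```

With a real model, swapping the stub for `model.predict` is the only change needed; the thresholding loop stays the same.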
