Website | Code | Paper
EAI-Distill-0.5b is a fine-tuned version of Qwen2.5-0.5B-Instruct designed for document classification across 12 taxonomic categories. This model is optimized for high-throughput classification of web documents and produces structured metadata for large-scale dataset curation.
The model classifies documents along the following dimensions, emitting a primary and an optional secondary label for each: Free Decimal Correspondence (FDC) subject, Bloom cognitive process, Bloom knowledge domain, document type (v1 and v2), extraction artifacts, missing content, reasoning depth, technical correctness, and educational level.
The model achieves an average Cohen's κ agreement of 0.71-0.74 with our golden annotators, GPT-4o and Claude 3.5 Sonnet, on held-out evaluation sets, which is within 3% of its teacher model Qwen2.5-32b-Instruct while being 64× smaller.
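For context, Cohen's κ measures agreement between two annotators after correcting for chance. A minimal sketch of how such a score can be computed with scikit-learn; the labels below are illustrative placeholders, not evaluation data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical primary labels for one taxonomy field (e.g., reasoning depth)
# on the same held-out documents; values are illustrative only.
model_labels  = [3, 2, 4, 3, 1, 2, 5, 3]  # EAI-Distill-0.5b predictions
golden_labels = [3, 2, 4, 2, 1, 2, 5, 3]  # golden annotator (e.g., Claude 3.5 Sonnet)

# kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Cohen's kappa: {cohen_kappa_score(model_labels, golden_labels):.2f}")
```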
To classify a document with transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import random

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("EssentialAI/EAI-Distill-0.5b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("EssentialAI/EAI-Distill-0.5b")

def chunk_text(text, max_char_per_doc=30000):
    """Cap long documents by sampling the beginning, a random middle
    window, and the end, so the prompt stays within budget."""
    if len(text) <= max_char_per_doc:
        return text
    chunk_size = max_char_per_doc // 3
    start = text[:chunk_size]
    # Sample a random window from the interior of the document
    middle_start = chunk_size
    middle_end = len(text) - chunk_size
    mid_point = random.randint(middle_start + chunk_size // 2, middle_end - chunk_size // 2)
    middle = text[mid_point - chunk_size // 2 : mid_point + chunk_size // 2]
    end = text[-chunk_size:]
    return f"[beginning]\n{start}\n[middle]\n{middle}\n[end]\n{end}"

def classify_document(text):
    chunked_text = chunk_text(text)
    messages = [
        {"role": "system", "content": "taxonomy"},  # fixed system prompt
        {"role": "user", "content": chunked_text},
    ]
    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    # Decode only the newly generated tokens, not the echoed prompt
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example usage
document_text = "Your document content here..."
classification = classify_document(document_text)
print(classification)
```
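Since the model targets high-throughput curation, one-document-at-a-time generation is usually the bottleneck. A batched sketch on top of the snippet above; the batch size, device, and left-padding setup are assumptions for illustration, not part of the model card:

```python
import torch

def classify_batch(texts, batch_size=32, device="cuda"):
    """Illustrative batched classification; assumes a GPU is available."""
    model.to(device).eval()
    tokenizer.padding_side = "left"  # left-pad so generation continues from each prompt's end
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    results = []
    for i in range(0, len(texts), batch_size):
        prompts = [
            tokenizer.apply_chat_template(
                [{"role": "system", "content": "taxonomy"},
                 {"role": "user", "content": chunk_text(t)}],
                tokenize=False,
                add_generation_prompt=True,
            )
            for t in texts[i:i + batch_size]
        ]
        inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(device)
        with torch.no_grad():
            outputs = model.generate(**inputs, max_new_tokens=100)
        # Keep only tokens generated after the (padded) prompts
        gen = outputs[:, inputs["input_ids"].shape[1]:]
        results.extend(tokenizer.batch_decode(gen, skip_special_tokens=True))
    return results
```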
The model outputs classifications in a condensed format:
```
{FDC primary},{FDC secondary or skip}
{Bloom cognitive process primary (1-6)},{Bloom cognitive process secondary (1-6) or skip}
{Bloom knowledge domain primary (1-4)},{Bloom knowledge domain secondary (1-4) or skip}
{Document type v1 primary (1-17)},{Document type v1 secondary (1-17) or skip}
{Extraction artifacts primary (0-4)},{Extraction artifacts secondary (0-4) or skip}
{Missing content primary (0-6)},{Missing content secondary (0-6) or skip}
{Document type v2 primary (1-25)},{Document type v2 secondary (1-25) or skip}
{Reasoning depth primary (1-6)},{Reasoning depth secondary (1-6) or skip}
{Technical correctness primary (1-6)},{Technical correctness secondary (1-6) or skip}
{Educational level primary (1-5)},{Educational level secondary (1-5) or skip}
```
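Each line is a `primary,secondary` pair, with `skip` when no secondary label applies. A small parser sketch; the field names are shorthand chosen here for illustration, not an official schema:

```python
FIELDS = [
    "fdc", "bloom_cognitive_process", "bloom_knowledge_domain",
    "document_type_v1", "extraction_artifacts", "missing_content",
    "document_type_v2", "reasoning_depth", "technical_correctness",
    "educational_level",
]

def parse_classification(output: str) -> dict:
    """Map the 10-line condensed output to {field: (primary, secondary)},
    with secondary set to None when the model emits "skip"."""
    lines = [line for line in output.strip().splitlines() if line.strip()]
    parsed = {}
    for field, line in zip(FIELDS, lines):
        primary, _, secondary = line.partition(",")
        secondary = secondary.strip()
        parsed[field] = (primary.strip(), None if secondary == "skip" else secondary)
    return parsed
```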
This model is designed for high-throughput classification of web documents and large-scale dataset curation: the structured labels can be used to filter corpora such as Essential-Web v1.0 by subject, document type, reasoning depth, technical correctness, or educational level.
If you use this model, please cite:
```bibtex
@misc{ai2025essentialwebv1024ttokens,
      title={Essential-Web v1.0: 24T tokens of organized web data},
      author={Essential AI and Andrew Hojel and Michael Pust and Tim Romanski and Yash Vanjani and Ritvik Kapila and Mohit Parmar and Adarsh Chaluvaraju and Alok Tripathy and Anil Thomas and Ashish Tanwer and Darsh J Shah and Ishaan Shah and Karl Stratos and Khoi Nguyen and Kurt Smith and Michael Callahan and Peter Rushton and Philip Monk and Platon Mazarakis and Saad Jamal and Saurabh Srivastava and Somanshu Singla and Ashish Vaswani},
      year={2025},
      eprint={2506.14111},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14111},
}
```