
🏷️ EAI-Distill-0.5b

πŸ† Website | πŸ–₯️ Code | πŸ“– Paper

📋 Model Description

EAI-Distill-0.5b is a fine-tuned version of Qwen2.5-0.5B-Instruct designed for document classification across 12 taxonomic categories. This model is optimized for high-throughput classification of web documents and produces structured metadata for large-scale dataset curation.

The model classifies documents across the following dimensions:

  • 📚 Free Decimal Correspondence (FDC): Subject matter classification based on the Dewey Decimal System
  • 🧠 Bloom's Taxonomy: Cognitive process (Remember/Understand/Apply/Analyze/Evaluate/Create) and knowledge domain (Factual/Conceptual/Procedural/Metacognitive)
  • 📄 Document Type: Web page categorization (News, Academic, Reference, Code, Social, etc.)
  • 🔍 Content Quality: Extraction artifacts and missing-content detection
  • 🎓 Educational Metadata: Reasoning depth, technical correctness, educational level
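For downstream processing, the numeric Bloom's Taxonomy codes in the model's output can be mapped back to the names above. The dictionaries below are an illustrative sketch, assuming the codes follow the list order given here (matching the 1-6 and 1-4 ranges in the output format):

```python
# Assumed mapping from output codes to Bloom's Taxonomy labels,
# in the order the labels are listed in this model card.
BLOOM_COGNITIVE = {1: "Remember", 2: "Understand", 3: "Apply",
                   4: "Analyze", 5: "Evaluate", 6: "Create"}
BLOOM_KNOWLEDGE = {1: "Factual", 2: "Conceptual",
                   3: "Procedural", 4: "Metacognitive"}
```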

🚀 Training Details

  • 🤖 Base Model: Qwen2.5-0.5B-Instruct
  • 📊 Training Data: 82B synthetic tokens generated by Qwen2.5-32B-Instruct (teacher model) on 104M Common Crawl documents
  • ⚙️ Optimizer: AdamW (β₁=0.9, β₂=0.95, weight_decay=0.1)
  • 📈 Learning Rate: 1×10⁻⁴ with linear warmup (2B tokens), cosine decay to 1×10⁻⁵, then linear anneal to 0
  • 📦 Batch Size: 2M tokens
  • 📏 Sequence Length: 16,384 tokens
  • 💻 Hardware: AMD MI300X GPUs
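The learning-rate schedule above (linear warmup, cosine decay, final linear anneal) can be sketched as a function of tokens seen. This is a minimal illustration, not training code from this repo; the card does not state the duration of the final anneal, so the 2B-token `anneal` value here is an assumption:

```python
import math

def lr_schedule(tokens, peak=1e-4, floor=1e-5,
                warmup=2e9, total=82e9, anneal=2e9):
    """Sketch of the schedule described above: linear warmup to the peak LR,
    cosine decay down to the floor, then a linear anneal to 0.
    The anneal duration is an assumption; the model card does not specify it."""
    if tokens < warmup:
        # Linear warmup over the first `warmup` tokens
        return peak * tokens / warmup
    anneal_start = total - anneal
    if tokens < anneal_start:
        # Cosine decay from peak to floor
        progress = (tokens - warmup) / (anneal_start - warmup)
        return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * progress))
    # Final linear anneal from floor to 0
    return floor * max(0.0, (total - tokens) / anneal)
```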

📊 Performance

The model achieves an average Cohen's κ agreement of 0.71-0.74 with our golden annotators, GPT-4o and Claude 3.5 Sonnet, on held-out evaluation sets, within 3% of its teacher model Qwen2.5-32B-Instruct while being 64× smaller.
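Cohen's κ measures inter-rater agreement corrected for the agreement expected by chance. A minimal pure-Python sketch of the metric (for illustration only; not code from this repo):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each rater's marginal label frequencies
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n**2
    return (observed - expected) / (1 - expected)
```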

💻 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import random

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("EssentialAI/EAI-Distill-0.5b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("EssentialAI/EAI-Distill-0.5b")

def chunk_text(text, max_char_per_doc=30000):
    """Reduce long documents to a beginning/middle/end sample."""
    if len(text) <= max_char_per_doc:
        return text

    chunk_size = max_char_per_doc // 3
    start = text[:chunk_size]
    end = text[-chunk_size:]

    # Sample a random window from the middle of the document
    middle_start = chunk_size
    middle_end = len(text) - chunk_size
    mid_point = random.randint(middle_start + chunk_size // 2,
                               middle_end - chunk_size // 2)
    middle = text[mid_point - chunk_size // 2:mid_point + chunk_size // 2]

    return f"[beginning]\n{start}\n[middle]\n{middle}\n[end]\n{end}"

def classify_document(text):
    chunked_text = chunk_text(text)

    messages = [
        {"role": "system", "content": "taxonomy"},
        {"role": "user", "content": chunked_text},
    ]

    prompt = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True,
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    # Decode only the newly generated tokens, not the echoed prompt
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Example usage
document_text = "Your document content here..."
classification = classify_document(document_text)
print(classification)
```

📤 Output Format

The model outputs classifications in a condensed format, one line per dimension ("skip" means no secondary label):

```
{FDC primary},{FDC secondary or skip}
{Bloom cognitive process primary (1-6)},{Bloom cognitive process secondary (1-6) or skip}
{Bloom knowledge domain primary (1-4)},{Bloom knowledge domain secondary (1-4) or skip}
{Document type v1 primary (1-17)},{Document type v1 secondary (1-17) or skip}
{Extraction artifacts primary (0-4)},{Extraction artifacts secondary (0-4) or skip}
{Missing content primary (0-6)},{Missing content secondary (0-6) or skip}
{Document type v2 primary (1-25)},{Document type v2 secondary (1-25) or skip}
{Reasoning depth primary (1-6)},{Reasoning depth secondary (1-6) or skip}
{Technical correctness primary (1-6)},{Technical correctness secondary (1-6) or skip}
{Educational level primary (1-5)},{Educational level secondary (1-5) or skip}
```
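A downstream pipeline has to split these ten lines back into fields. Here is a minimal parsing sketch; the field names are illustrative labels chosen for this example, not an official schema:

```python
# Field order follows the output format above; names are my own labels.
FIELDS = [
    "fdc", "bloom_cognitive_process", "bloom_knowledge_domain",
    "document_type_v1", "extraction_artifacts", "missing_content",
    "document_type_v2", "reasoning_depth", "technical_correctness",
    "educational_level",
]

def parse_labels(output):
    """Split the model's 10-line condensed output into
    {field: (primary, secondary-or-None)} pairs."""
    parsed = {}
    for field, line in zip(FIELDS, output.strip().splitlines()):
        primary, _, secondary = line.partition(",")
        secondary = secondary.strip()
        parsed[field] = (primary.strip(),
                        None if secondary in ("", "skip") else secondary)
    return parsed
```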

🎯 Intended Use

This model is designed for:

  • πŸ—οΈ Large-scale web document classification and metadata generation
  • πŸ”§ Dataset curation through taxonomic filtering
  • βœ… Content quality assessment for training data preparation
  • πŸ“š Educational content analysis and organization

⚠️ Limitations

  • Optimized for English web documents extracted using resiliparse
  • Documents over 30k characters are automatically chunked, which may affect classification accuracy
  • Performance may vary on content significantly different from Common Crawl web data
  • Classification categories are based on web content patterns and may not generalize to other document types

πŸ“ Citation

If you use this model, please cite:

```bibtex
@misc{ai2025essentialwebv1024ttokens,
      title={Essential-Web v1.0: 24T tokens of organized web data},
      author={Essential AI and : and Andrew Hojel and Michael Pust and Tim Romanski and Yash Vanjani and Ritvik Kapila and Mohit Parmar and Adarsh Chaluvaraju and Alok Tripathy and Anil Thomas and Ashish Tanwer and Darsh J Shah and Ishaan Shah and Karl Stratos and Khoi Nguyen and Kurt Smith and Michael Callahan and Peter Rushton and Philip Monk and Platon Mazarakis and Saad Jamal and Saurabh Srivastava and Somanshu Singla and Ashish Vaswani},
      year={2025},
      eprint={2506.14111},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14111},
}
```