# SmolLM2-135M-Eagle
SmolLM2-135M-Eagle is a version of the SmolLM2-135M model fine-tuned on the EagleSFT dataset, designed to improve the model's capabilities in both Russian and English language tasks.
A GGUF version of this model is available at SmolLM2-135M-Eagle-GGUF.
## Model Description
SmolLM2-135M-Eagle is a lightweight language model that has been fine-tuned specifically to handle bilingual content. This fine-tuning extends the base model's capabilities to better understand and generate content in Russian while maintaining its English competency.
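As a quick sanity check, the model can be loaded with the `transformers` library like any other causal LM. This is a minimal sketch; the repository id below is an assumption based on the model name, so substitute the actual Hub id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SmolLM2-135M-Eagle"  # assumed repo id; replace with the actual Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion for a Russian prompt ("Translate to English: Hi, how are you?")
inputs = tokenizer("Переведи на английский: Привет, как дела?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```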
## Base Model
The model is built upon SmolLM2-135M, a compact language model with 135 million parameters that offers a good balance between performance and resource requirements.
## Fine-tuning Details
### Dataset
The model was fine-tuned on the EagleSFT dataset, which contains 536,231 pairs of human questions and machine-generated responses in both Russian and English languages. The dataset primarily focuses on educational content but also includes everyday questions and casual conversations.
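For reference, a dataset of this shape can be inspected with the `datasets` library. The Hub id below is a placeholder, not a confirmed identifier:

```python
from datasets import load_dataset

ds = load_dataset("EagleSFT", split="train")  # placeholder id; use the dataset's actual Hub id
print(len(ds))   # expected: 536,231 question/response pairs
print(ds[0])     # inspect a single pair
```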
### Environmental Impact
- Training duration: 26h 26m total
  - 15h 19m 52s in Tyumen, Russia (300W power consumption)
  - 11h 6m 8s in Saint Petersburg, Russia (360W power consumption)
- Hardware: 1 x RTX 4090
- Carbon emissions: approximately 3.01 kg CO2eq
  - Calculated from the average power consumption above and an average grid intensity of 350 g CO2eq/kWh in these regions
  - Tyumen: 300W * 15.33h * 350g/kWh ≈ 1.61 kg CO2eq
  - Saint Petersburg: 360W * 11.10h * 350g/kWh ≈ 1.40 kg CO2eq
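The figures above follow directly from energy = power × time and the assumed grid intensity, as this short script reproduces:

```python
# kg CO2eq = power (kW) * duration (h) * 0.350 kg CO2eq per kWh
runs = {"Tyumen": (0.300, 15.33), "Saint Petersburg": (0.360, 11.10)}
total = 0.0
for city, (kw, hours) in runs.items():
    kg = kw * hours * 0.350
    total += kg
    print(f"{city}: {kg:.2f} kg CO2eq")
print(f"Total: {total:.2f} kg CO2eq")  # ~3.01 kg CO2eq
```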
### Training Parameters
- Training approach: Supervised Fine-Tuning (SFT)
- Training epochs: 2
- Learning rate: 3.0e-04
- Precision: bfloat16
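For orientation, the listed hyperparameters map onto a TRL `SFTTrainer` run roughly as follows. This is a sketch under assumptions, not the authors' actual training script; the dataset id and `output_dir` in particular are placeholders:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

config = SFTConfig(
    num_train_epochs=2,       # from the list above
    learning_rate=3.0e-4,     # from the list above
    bf16=True,                # bfloat16 precision
    output_dir="smollm2-135m-eagle-sft",  # placeholder
)
trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",                     # base model
    train_dataset=load_dataset("EagleSFT", split="train"),  # placeholder id
    args=config,
)
trainer.train()
```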
## Limitations and Capabilities
It is important to note that this model received no additional pre-training; it only underwent SFT on a relatively small number of tokens. As a result, the model has far less data to rely on when answering in Russian than when answering in English.
Despite these extensive limitations, the model shows minimal improvements in:
- Basic recognition of Russian prompts (though with frequent misunderstandings)
- Handling simple tasks formatted as "{question in Russian}, answer in English" (see the sketch after this list)
- Basic translation from Russian to English (though quality remains poor)
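The prompt pattern from the list above can be exercised like this, reusing the `model` and `tokenizer` from the earlier loading sketch; the Russian question ("What is photosynthesis?") is just an example:

```python
# "{question in Russian}, answer in English" pattern
prompt = "Что такое фотосинтез? Answer in English."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```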
The model's minimal understanding of Russian comes solely from the supervised fine-tuning process, without any proper pre-training on a Russian text corpus, resulting in severely limited capabilities.
### Experimental Capabilities
The model demonstrates some experimental capabilities, but with significant limitations:
- Basic Russian text understanding (with frequent errors and misinterpretations)
- Limited question answering in Russian (quality significantly lower than English)
- Basic Russian to English translation (better than English to Russian)
### Limitations
- **NOT SUITABLE FOR PRODUCTION USE**: this model should not be used in production environments in any form
- Extremely limited knowledge base for Russian due to the lack of pre-training on Russian text
- The tokenizer is not optimized for Russian, which results in inefficient token usage
- Output quality in Russian will be unsatisfactory for most use cases
- May produce inaccurate, inconsistent, or inappropriate responses, especially in Russian
- All limitations of the base SmolLM2-135M model still apply