---
base_model: unsloth/SmolLM2-1.7B
library_name: peft
license: apache-2.0
language:
- tr
metrics:
- name: ROUGE-1
type: rouge
value: 0.2439
- name: ROUGE-2
type: rouge
value: 0.1303
- name: ROUGE-L
type: rouge
value: 0.2147
- name: BLEU
type: bleu
value: 0.0406
- name: METEOR
type: meteor
value: 0.2262
- name: BERTScore Precision
type: bertscore
value: 0.5286
- name: BERTScore Recall
type: bertscore
value: 0.5834
- name: BERTScore F1
type: bertscore
value: 0.553
---
# Model Card for SmolLM2-Ziraat-Turkish-v1
<!-- The Turkish version is provided below. -->
## 🧠 Model Summary
**SmolLM2-Ziraat-Turkish-v1** is a fine-tuned version of the [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) model, trained using [Unsloth](https://github.com/unslothai/unsloth) and PEFT (Parameter-Efficient Fine-Tuning). This model has been tailored for Turkish language tasks with a focus on agriculture, finance, and general-purpose conversation.
## 🇹🇷 Model Özeti
**SmolLM2-Ziraat-Turkish-v1**, [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) tabanlı bir model olup, [Unsloth](https://github.com/unslothai/unsloth) ve PEFT (Parameter-Efficient Fine-Tuning) yöntemleriyle Türkçe diline yönelik olarak eğitilmiştir. Tarım, finans ve genel sohbet amaçlı kullanım senaryoları için optimize edilmiştir.
---
## 🔍 Model Details / Model Detayları
- **Developed by / Geliştiren:** [hosmankarabulut](https://huggingface.co/hosmankarabulut)
- **Model type / Model türü:** Causal Language Model (autoregressive)
- **Language / Dil:** Turkish (Türkçe)
- **License / Lisans:** apache-2.0
- **Fine-tuned with / Eğitim Aracı:** [Unsloth](https://github.com/unslothai/unsloth) + PEFT
- **Base model / Taban model:** [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B)
---
## 🔗 Sources / Kaynaklar
- **Model Repository / Model Deposu:** [https://huggingface.co/hosmankarabulut/SmolLM2-Ziraat-Turkish-v1](https://huggingface.co/hosmankarabulut/SmolLM2-Ziraat-Turkish-v1)
- **Base model / Taban model:** [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B)
---
## ✅ Intended Uses / Amaçlanan Kullanım
- Turkish chatbots, Q&A systems
- Agricultural and financial assistants
- General-purpose Turkish text generation
---
## 🚫 Out-of-Scope Use / Uygun Olmayan Kullanım
- Medical, legal, or high-risk decision-making
- Misinformation or unethical applications
---
## ⚠️ Bias, Risks and Limitations / Önyargılar, Riskler ve Sınırlamalar
The model may retain biases inherited from the base model and its training data. Performance is strongest for Turkish text and for the domains it was tuned on (agriculture and finance).
---
## 🧪 Training & Evaluation / Eğitim ve Değerlendirme
- **Training Library / Eğitim Kütüphanesi:** [Unsloth](https://github.com/unslothai/unsloth) (an illustrative configuration sketch follows this list)
- **Hardware Used / Kullanılan Donanım:** RTX 3090
- **Precision:** bf16 (mixed precision)
- **Dataset:** Custom Turkish dataset (agriculture-focused)
- **Evaluation Tool:** `wandb` (Weights & Biases)
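The exact training configuration has not been published in this card. The snippet below is only an illustrative sketch of how a LoRA fine-tune of the base model could be set up with Unsloth + PEFT in bf16; the hyperparameters, dataset file name, and output directory are placeholders, not the values used for this model.

```python
# Illustrative sketch only: hyperparameters, dataset path and output dir are
# placeholders, NOT the actual configuration used for SmolLM2-Ziraat-Turkish-v1.
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model with Unsloth in bf16.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/SmolLM2-1.7B",
    max_seq_length=2048,
    dtype=torch.bfloat16,
)

# Attach LoRA adapters (PEFT); r / alpha / target modules are example values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical local dataset with a "text" column of Turkish agriculture samples.
dataset = load_dataset("json", data_files="ziraat_tr.jsonl", split="train")

trainer = SFTTrainer(  # trl<=0.12-style signature; newer trl moves these kwargs into SFTConfig
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="smollm2-ziraat-tr",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,          # matches the bf16 mixed-precision note above
        logging_steps=10,
        report_to="wandb",  # metrics go to Weights & Biases
    ),
)
trainer.train()
```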
### 📊 Evaluation Results / Değerlendirme Sonuçları
| Metric | Value |
|----------------------|--------|
| ROUGE-1 | 0.2439 |
| ROUGE-2 | 0.1303 |
| ROUGE-L | 0.2147 |
| BLEU | 0.0406 |
| METEOR | 0.2262 |
| BERTScore Precision | 0.5286 |
| BERTScore Recall | 0.5834 |
| BERTScore F1 | 0.553 |
✅ All metrics were computed successfully and logged to `wandb`.
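The scores in the table come from the project's own evaluation run. As a reference, the snippet below is a minimal sketch of how the same metric types can be recomputed with the `evaluate` library and logged to Weights & Biases; the prediction/reference lists and the project name are placeholders.

```python
# Minimal sketch: recomputing the reported metric types with `evaluate` and
# logging them to wandb. The inputs and project name below are placeholders.
import evaluate
import wandb

predictions = ["model çıktısı ..."]   # generated Turkish outputs (placeholder)
references  = ["referans metin ..."]  # gold references (placeholder)

rouge     = evaluate.load("rouge").compute(predictions=predictions, references=references)
bleu      = evaluate.load("bleu").compute(predictions=predictions, references=references)
meteor    = evaluate.load("meteor").compute(predictions=predictions, references=references)
bertscore = evaluate.load("bertscore").compute(
    predictions=predictions, references=references, lang="tr"
)

wandb.init(project="smollm2-ziraat-turkish-eval")  # hypothetical project name
wandb.log({
    "rouge1": rouge["rouge1"],
    "rouge2": rouge["rouge2"],
    "rougeL": rouge["rougeL"],
    "bleu": bleu["bleu"],
    "meteor": meteor["meteor"],
    "bertscore_precision": sum(bertscore["precision"]) / len(bertscore["precision"]),
    "bertscore_recall": sum(bertscore["recall"]) / len(bertscore["recall"]),
    "bertscore_f1": sum(bertscore["f1"]) / len(bertscore["f1"]),
})
```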
---
## 💡 Quickstart / Hızlı Başlangıç
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the fine-tuned model and tokenizer (bf16 weights).
model = AutoModelForCausalLM.from_pretrained("hosmankarabulut/SmolLM2-Ziraat-Turkish-v1", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("hosmankarabulut/SmolLM2-Ziraat-Turkish-v1")

# Turkish prompt: "What do you think about agricultural policies in Turkey?"
inputs = tokenizer("Türkiye'de tarım politikaları hakkında ne düşünüyorsun?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
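Since this card lists `library_name: peft`, the repository may ship LoRA adapter weights rather than a fully merged checkpoint. If the plain `AutoModelForCausalLM` call above does not resolve the adapter automatically, the sketch below shows one way to load it explicitly with PEFT; it assumes the repository contains an `adapter_config.json`, which this card does not confirm.

```python
# Sketch, assuming the repo hosts PEFT/LoRA adapter weights on top of
# unsloth/SmolLM2-1.7B (not confirmed by this card).
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "hosmankarabulut/SmolLM2-Ziraat-Turkish-v1",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("hosmankarabulut/SmolLM2-Ziraat-Turkish-v1")

# Optionally merge the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```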