---
license: mit
datasets:
- BrewInteractive/alpaca-tr
- ituperceptron/turkish_medical_reasoning
language:
- tr
base_model:
- TURKCELL/Turkcell-LLM-7b-v1
pipeline_tag: text-generation
tags:
- chemistry
---

## Model Card: Turkish Chatbot

**Model Name:** E-Model-V2

**Developer:** ERENALP ÇETİNTÜRK

**Contact:** [email protected]

**License:** MIT

### 1. Model Description
This model is a fine-tuned version of TURKCELL/Turkcell-LLM-7b-v1, optimized for casual conversation in Turkish.
It was trained for twice as long as its predecessor, giving it stronger conversational ability and a deeper grasp of the nuances of Turkish.
Designed for natural, coherent, and enjoyable interactions, this chatbot is well suited to everyday conversation.

*   **Model Type:** Mistral-based (fine-tuned)
*   **Language(s):** Turkish
*   **Finetuned from model:** TURKCELL/Turkcell-LLM-7b-v1

### 2. Intended Use

This model is intended for casual conversation and entertainment. It can
power a personal chatbot or serve as a component in a larger application
that requires Turkish-language interaction. It is *not* intended for
critical applications such as healthcare, finance, or legal advice.

### 3. Factors

*   **Domain:** General conversation
*   **User Demographics:** No specific demographic targeting.
*   **Input Length:** The model is designed for relatively short input
sequences; longer inputs may degrade performance (see the history-trimming
sketch after this list).
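Because long inputs can degrade output quality, callers may want to cap how much history is sent per turn. Below is a minimal sketch, assuming the `conversation_history` message-list format used in the inference code in section 6; the `max_turns` cap and the helper name `trim_history` are illustrative, not part of the model:

```python
def trim_history(conversation_history, max_turns=6):
    """Keep the system prompt plus only the most recent turns.

    conversation_history: list of {"role": ..., "content": ...} dicts,
    with the system prompt as the first entry.
    max_turns: illustrative cap on retained user/assistant messages.
    """
    system = conversation_history[:1]               # always keep the system prompt
    recent = conversation_history[1:][-max_turns:]  # newest messages only
    return system + recent
```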

### 4. Bias, Risks, and Limitations

*   **Bias:** The model may exhibit biases present in the training data.  
This could manifest as stereotypical responses or unequal treatment of 
different topics.
*   **Hallucinations:** The model may generate factually incorrect or 
nonsensical responses.
*   **Safety:** The model may generate inappropriate or offensive content, 
although efforts have been made to mitigate this risk.
*   **Limited Knowledge:** The model's knowledge is limited to the data it 
was trained on. It may not be able to answer questions about current 
events or specialized topics.
*   **Turkish Specificity:** The model is specifically trained for Turkish 
and will not perform well with other languages.

### 5. Training Details

#### Training Data

The model was fine-tuned on a combination of the following datasets (a loading sketch follows the list):

* BrewInteractive/alpaca-tr
* ituperceptron/turkish_medical_reasoning
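A minimal sketch of combining the two datasets with the Hugging Face `datasets` library follows. The split and column names ("instruction", "output", "question", "answer") are illustrative guesses, since the exact preprocessing used for this model is not documented; check each dataset card for the real schema.

```python
from datasets import load_dataset, concatenate_datasets

# Both datasets are public on the Hugging Face Hub.
alpaca_tr = load_dataset("BrewInteractive/alpaca-tr", split="train")
medical_tr = load_dataset("ituperceptron/turkish_medical_reasoning", split="train")

# The two datasets have different columns, so project each one down to a
# single "text" field before concatenating. Field names here are guesses.
def alpaca_to_text(row):
    return {"text": f"{row['instruction']}\n{row['output']}"}

def medical_to_text(row):
    return {"text": f"{row['question']}\n{row['answer']}"}

train_data = concatenate_datasets([
    alpaca_tr.map(alpaca_to_text, remove_columns=alpaca_tr.column_names),
    medical_tr.map(medical_to_text, remove_columns=medical_tr.column_names),
]).shuffle(seed=42)
```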

#### Training Procedure

*   **Training Regime:** Fine-tuning
*   **Hyperparameters:**
    *   Learning Rate: 2e-5
    *   Batch Size: 13135
    *   Epochs: 2
    *   Optimizer: AdamW
*   **Preprocessing:** The training data was tokenized with the base model's tokenizer (a hedged fine-tuning sketch follows below).
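The authors' training script is not published; the following is a minimal sketch of what a run with the listed hyperparameters could look like using the `transformers` `Trainer`. The `train_data` variable is the combined dataset from the sketch above, and the per-device batch size and sequence length are assumptions:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base = "TURKCELL/Turkcell-LLM-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # needed for padded batching

# Tokenize the combined "text" column (see the dataset sketch above).
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = train_data.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="E-Model-V2",
    learning_rate=2e-5,             # from the hyperparameter list above
    num_train_epochs=2,             # from the hyperparameter list above
    per_device_train_batch_size=4,  # illustrative; depends on hardware
    optim="adamw_torch",            # AdamW, as listed
    bf16=True,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```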

### 6. How to Use the Model (Inference Code)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the merged fine-tuned model and tokenizer
model_dir = "E-Model-V2"
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,  # Use FP16 for memory efficiency
    device_map="auto"           # Automatically map to GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Ensure EOS token is set correctly
eos_token = tokenizer("<|im_end|>", add_special_tokens=False)["input_ids"][0]
if tokenizer.eos_token_id is None:
    tokenizer.eos_token_id = eos_token

# device_map="auto" already placed the model on the GPU, so no manual
# model.to(device) call is needed (it can conflict with accelerate's dispatch)

# System prompt (in Turkish): defines the assistant's ethical guidelines,
# scope limits, and safety behavior
system_prompt = """E Model, Türkçe odaklı etik yapay zeka asistanıdır. Küfür, hakaret, ayrımcılık, yasa dışı içerik veya kişisel mahremiyet ihlali kesinlikle yapılmaz. Türk dilbilgisi, kültürel bağlam ve yasal standartlar hassasiyetle uygulanır. Model, tıbbi/hukuki/finansal danışmanlık, gerçek zamanlı veriler veya uzun mantık zincirleri gerektiren görevlerde sınırlıdır. Hassas bilgi paylaşımı önerilmez, kritik kararlarda insan uzmanı görüşü zorunludur. Anlamadığı konularda açıkça belirtir, geri bildirimlerle sürekli iyileştirilir. Eğitim verileri metin tabanlıdır, güncel olayları takip edemez. Yanlış yanıt riski olduğunda bağımsız doğrulama tavsiye edilir. Ticari kullanım ve hassas konular önceden izne tabidir. Tüm etkileşimler, modelin yeteneklerini aşmayacak ve toplumsal değerleri koruyacak şekilde yapılandırılır."""

# Chatbot loop; the greeting says "Hello! How can I help you? (Type 'çık' to exit)"
print("Merhaba! Size nasıl yardımcı olabilirim? (Çıkmak için 'çık' yazın)")
conversation_history = [{"role": "system", "content": system_prompt}]  # Initialize with system prompt

while True:
    # Get user input
    user_input = input("Siz: ")
    
    # Exit condition ("çık" = "exit"; the farewell means "See you later!")
    if user_input.lower() == "çık":
        print("Görüşmek üzere!")
        break
    
    # Add user input to conversation history
    conversation_history.append({"role": "user", "content": user_input})
    
    # Tokenize the conversation history and append the assistant turn header
    encodeds = tokenizer.apply_chat_template(
        conversation_history,
        add_generation_prompt=True,  # so the model replies as the assistant
        return_tensors="pt"
    )
    model_inputs = encodeds.to(device)
    
    # Generate response
    generated_ids = model.generate(
        model_inputs,
        max_new_tokens=1024,
        do_sample=True,
        eos_token_id=eos_token,
        pad_token_id=eos_token,  # avoids a warning when no pad token is set
        temperature=0.7,
        top_p=0.95
    )
    
    # Decode the response
    generated_text = tokenizer.decode(generated_ids[0][model_inputs.shape[1]:], skip_special_tokens=True)
    
    # Add assistant response to history
    conversation_history.append({"role": "assistant", "content": generated_text})
    
    # Print the response
    print(f"Asistan: {generated_text}")

# Optional: Clear memory when done
del model
torch.cuda.empty_cache() 
```
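If GPU memory is tight, the model can optionally be loaded with 4-bit quantization instead of FP16. A hedged alternative to the load above, assuming the `bitsandbytes` package is installed alongside `transformers`:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization cuts weight memory to roughly a quarter of FP16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "E-Model-V2",
    quantization_config=bnb_config,
    device_map="auto",
)
```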

### 7. Ethical Considerations

*   **Responsible Use:** This model should be used responsibly and 
ethically.
*   **Transparency:** Users should be informed that they are interacting 
with an AI chatbot.
*   **Bias Mitigation:** Efforts should be made to mitigate bias in the 
model's responses.

### 8. Limitations and Future Work

*   **Context Length:** The model has a limited context length, which may 
affect its ability to handle long conversations.
*   **Knowledge Updates:** The model's knowledge is static and needs to be 
updated periodically.
*   **Future Work:** Future work could focus on improving the model's 
context length, knowledge updates, and bias mitigation.