Model Card: Turkish Chatbot
Model Name: E-Model-V1
Developer: ERENALP ÇETİNTÜRK
Contact: [email protected]
License: MIT
1. Model Description
This model is a Turkish language chatbot fine-tuned from the TURKCELL/Turkcell-LLM-7b-v1 model. It is designed for casual conversation in Turkish. The model aims to provide engaging and coherent responses to user inputs.
- Model Type: Llama (fine-tuned)
- Language(s): Turkish
- Finetuned from model: TURKCELL/Turkcell-LLM-7b-v1
2. Intended Use
This model is intended for casual conversation and entertainment purposes. It can be used to create a chatbot for personal use or as a component in a larger application where Turkish language interaction is required. It is not intended for use in critical applications such as healthcare, finance, or legal advice.
3. Factors
- Domain: General conversation
- User Demographics: No specific demographic targeting.
- Input Length: The model is designed for relatively short input sequences; longer inputs may degrade output quality (see the truncation sketch below).
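Since long inputs can degrade quality, a caller can clamp prompts before generation. Below is a minimal sketch, assuming a 2048-token budget (the exact context window is not stated on this card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("E-Model-V1")

MAX_INPUT_TOKENS = 2048  # assumption: adjust to the model's actual context window

def clamp_input(text: str) -> str:
    # Keep only the most recent MAX_INPUT_TOKENS tokens so the end of the
    # prompt (usually the actual question) survives truncation.
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    return tokenizer.decode(ids[-MAX_INPUT_TOKENS:])
```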
4. Bias, Risks, and Limitations
- Bias: The model may exhibit biases present in the training data. This could manifest as stereotypical responses or unequal treatment of different topics.
- Hallucinations: The model may generate factually incorrect or nonsensical responses.
- Safety: The model may generate inappropriate or offensive content, although efforts have been made to mitigate this risk.
- Limited Knowledge: The model's knowledge is limited to the data it was trained on. It may not be able to answer questions about current events or specialized topics.
- Turkish Specificity: The model is trained specifically for Turkish and will not perform well in other languages (a simple input-language guard is sketched after this list).
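Because the model is Turkish-only, an application can flag non-Turkish input before it reaches the model. A minimal sketch using the third-party langdetect package (an application-level guard, not part of the model itself):

```python
# pip install langdetect
from langdetect import detect, LangDetectException

def looks_turkish(text: str) -> bool:
    try:
        return detect(text) == "tr"
    except LangDetectException:  # raised for empty or undecidable input
        return False

user_input = "Hello there"  # example of non-Turkish input
if not looks_turkish(user_input):
    print("Uyarı: model yalnızca Türkçe için eğitilmiştir.")  # "Warning: the model is trained for Turkish only."
```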
5. Training Details
Training Data
The model was fine-tuned on a combination of the following datasets (a loading sketch follows the list):
- BrewInteractive/alpaca-tr
- ituperceptron/turkish_medical_reasoning
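Both corpora are available on the Hugging Face Hub and can be combined with the datasets library. A minimal sketch; the split name and the flattening step are assumptions, since the two datasets' schemas differ:

```python
from datasets import load_dataset, concatenate_datasets

# Load both corpora from the Hub (the "train" split is an assumption)
alpaca_tr = load_dataset("BrewInteractive/alpaca-tr", split="train")
medical_tr = load_dataset("ituperceptron/turkish_medical_reasoning", split="train")

# concatenate_datasets requires matching columns, so flatten each corpus to a
# single "text" column first; this placeholder joins all fields of an example.
def to_text(example):
    return {"text": " ".join(str(v) for v in example.values())}

combined = concatenate_datasets([
    alpaca_tr.map(to_text, remove_columns=alpaca_tr.column_names),
    medical_tr.map(to_text, remove_columns=medical_tr.column_names),
]).shuffle(seed=42)
```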
Training Procedure
- Training Regime: Fine-tuning
- Hyperparameters:
- Learning Rate: 2e-5
- Batch Size: 13135
- Epochs: 1
- Optimizer: AdamW
- Preprocessing: The training data was tokenized before fine-tuning (see the sketch below).
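These hyperparameters map directly onto transformers' TrainingArguments. The sketch below shows that setup; it is not the author's actual training script. The per-device batch size and accumulation steps are assumptions (the batch size listed above reads as an aggregate figure), and `combined` refers to the dataset sketch in the Training Data subsection:

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "TURKCELL/Turkcell-LLM-7b-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # padding is needed for batching

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = combined.map(tokenize, batched=True, remove_columns=combined.column_names)

args = TrainingArguments(
    output_dir="E-Model-V1",
    learning_rate=2e-5,              # as listed above
    num_train_epochs=1,              # as listed above
    optim="adamw_torch",             # AdamW, as listed above
    per_device_train_batch_size=4,   # assumption
    gradient_accumulation_steps=8,   # assumption
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),  # causal-LM labels
)
trainer.train()
```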
6. How to Use the Model (Inference Code)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

# Load the merged fine-tuned model and tokenizer
model_dir = "E-Model-V1"
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    torch_dtype=torch.float16,  # use FP16 for memory efficiency
    device_map="auto",          # automatically place layers on available GPUs
)
tokenizer = AutoTokenizer.from_pretrained(model_dir)

# Ensure the EOS token is set correctly; the chat template ends turns with <|im_end|>
eos_token_id = tokenizer("<|im_end|>", add_special_tokens=False)["input_ids"][0]
if tokenizer.eos_token_id is None:
    tokenizer.eos_token_id = eos_token_id

# Note: do not call model.to(device) here; device_map="auto" has already placed
# the model, and moving it again raises an error under accelerate.

# System prompt (in Turkish): defines the assistant's scope, ethical rules, and limitations
system_prompt = """E Model, Türkçe odaklı etik yapay zeka asistanıdır. Küfür, hakaret, ayrımcılık, yasa dışı içerik veya kişisel mahremiyet ihlali kesinlikle yapılmaz. Türk dilbilgisi, kültürel bağlam ve yasal standartlar hassasiyetle uygulanır. Model, tıbbi/hukuki/finansal danışmanlık, gerçek zamanlı veriler veya uzun mantık zincirleri gerektiren görevlerde sınırlıdır. Hassas bilgi paylaşımı önerilmez, kritik kararlarda insan uzmanı görüşü zorunludur. Anlamadığı konularda açıkça belirtir, geri bildirimlerle sürekli iyileştirilir. Eğitim verileri metin tabanlıdır, güncel olayları takip edemez. Yanlış yanıt riski olduğunda bağımsız doğrulama tavsiye edilir. Ticari kullanım ve hassas konular önceden izne tabidir. Tüm etkileşimler, modelin yeteneklerini aşmayacak ve toplumsal değerleri koruyacak şekilde yapılandırılır."""

# Chatbot loop
print("Merhaba! Size nasıl yardımcı olabilirim? (Çıkmak için 'çık' yazın)")
conversation_history = [{"role": "system", "content": system_prompt}]  # start with the system prompt

while True:
    # Get user input
    user_input = input("Siz: ")

    # Exit condition
    if user_input.lower() == "çık":
        print("Görüşmek üzere!")
        break

    # Add the user turn to the conversation history
    conversation_history.append({"role": "user", "content": user_input})

    # Apply the chat template, appending the assistant header so the model starts its reply
    encodeds = tokenizer.apply_chat_template(
        conversation_history, add_generation_prompt=True, return_tensors="pt"
    )
    model_inputs = encodeds.to(device)

    # Generate a response
    generated_ids = model.generate(
        model_inputs,
        max_new_tokens=1024,
        do_sample=True,
        eos_token_id=eos_token_id,
        temperature=0.7,
        top_p=0.95,
    )

    # Decode only the newly generated tokens
    generated_text = tokenizer.decode(
        generated_ids[0][model_inputs.shape[1]:], skip_special_tokens=True
    )

    # Add the assistant turn to the history and print it
    conversation_history.append({"role": "assistant", "content": generated_text})
    print(f"Asistan: {generated_text}")

# Optional: free GPU memory when done
del model
torch.cuda.empty_cache()
```
7. Ethical Considerations
- Responsible Use: This model should be used responsibly and ethically.
- Transparency: Users should be informed that they are interacting with an AI chatbot.
- Bias Mitigation: Efforts should be made to mitigate bias in the model's responses.
8. Limitations and Future Work
- Context Length: The model has a limited context length, which may affect its ability to handle long conversations (a history-trimming sketch follows this list).
- Knowledge Updates: The model's knowledge is static; keeping it current requires periodic re-training on fresh data.
- Future Work: Future work could focus on improving the model's context length, knowledge updates, and bias mitigation.
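One mitigation for the context-length limitation is to trim the oldest turns from `conversation_history` (see the inference code in section 6) before each generation. A minimal sketch, assuming a 2048-token budget:

```python
MAX_CONTEXT_TOKENS = 2048  # assumption: set to the model's real context window

def trim_history(history, tokenizer, budget=MAX_CONTEXT_TOKENS):
    # Keep the system prompt at index 0 and drop the oldest user/assistant
    # turns until the templated conversation fits within the token budget.
    while len(history) > 2:
        ids = tokenizer.apply_chat_template(history, return_tensors="pt")
        if ids.shape[1] <= budget:
            break
        del history[1]  # remove the oldest non-system message
    return history
```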