🧸 Empathy Chatbot: Fine-tuned Gemma for Emotional Conversations
Model ID: sajeewa/empathy-chat-gemma
This is a fine-tuned version of google/gemma-3-1b-it
designed to respond with care, warmth, and empathy in emotional conversations. It is trained on the EmpatheticDialogues dataset to be emotionally aware and conversationally comforting, like a caring friend who calls you "baby" or "cutey" and sprinkles in sweet emojis 🧸💕.
🧠 Model Details
- Base model: google/gemma-3-1b-it
- Fine-tuned with: Unsloth + 🤗 TRL (see the Unsloth loading sketch after this list)
- Dataset: EmpatheticDialogues
- Training location: Kaggle (2×T4 GPUs)
- Intended use: Friendly, emotionally supportive chatbots
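Because the model was trained with Unsloth, it can optionally be loaded through Unsloth for faster, memory-light inference. The snippet below is a minimal sketch under assumed settings; `max_seq_length` and 4-bit loading are illustrative choices, not values published with this model:

```python
# Optional: load through Unsloth for faster inference.
# A minimal sketch; max_seq_length and load_in_4bit are illustrative, not
# settings confirmed by the model author.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="sajeewa/empathy-chat-gemma",
    max_seq_length=2048,  # assumed context length for this sketch
    load_in_4bit=True,    # quantized loading to fit small GPUs
)
FastLanguageModel.for_inference(model)  # enable Unsloth's optimized decoding path
```

Plain `transformers` loading, shown in the next section, works as well.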
💬 Chat Template & Interface
This model uses Hugging Face's chat template format. The chatbot behaves like a sweet and caring friend who responds with emotionally intelligent and supportive language, using cute nicknames and emojis. Here's how you can interact with it:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch

model_id = "sajeewa/empathy-chat-gemma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# System message that sets the warm, supportive persona.
chat_history = [
    {
        "role": "system",
        "content": (
            "You are an empathetic AI and your friend. Always give lovely caring messages. "
            "Understand the user's feelings. Then provide a caring response. "
            "Please give responses as a good friend, using lovely words like 'baby', 'my cutey', etc. 💕 "
            "Use emojis to be calming 😊. Continue conversations with a warm tone."
        ),
    }
]

user_input = "I'm feeling lonely today."
chat_history.append({"role": "user", "content": user_input})

# Render the conversation with the model's chat template.
prompt = tokenizer.apply_chat_template(
    chat_history,
    tokenize=False,
    add_generation_prompt=True,
)

# The chat template already inserts special tokens (e.g. <bos>),
# so don't add them a second time here.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

# Stream tokens to stdout as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

output = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.7,
    top_p=0.95,
    top_k=50,
    do_sample=True,
    streamer=streamer,
)

# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(response)
```
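To keep the conversation going, append the assistant's reply to `chat_history`, add the next user message, and regenerate. A minimal multi-turn sketch, reusing `model`, `tokenizer`, and `chat_history` from above (the follow-up message is illustrative):

```python
# Continue the conversation: append the assistant's reply, then the next
# user turn, and generate again with the same template.
chat_history.append({"role": "assistant", "content": response})
chat_history.append({"role": "user", "content": "Thanks, that helps a little."})

prompt = tokenizer.apply_chat_template(
    chat_history, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```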