🧸 Empathy Chatbot: Fine-tuned Gemma for Emotional Conversations

Model ID: sajeewa/empathy-chat-gemma
This is a fine-tuned version of google/gemma-1.3b-it designed to respond with care, warmth, and empathy in emotional conversations. It was trained on the EmpatheticDialogues dataset to be emotionally aware and conversationally comforting, like a caring friend who calls you "baby" or "cutey" and sprinkles in sweet emojis 🧸💖.


🧠 Model Details

  • Base model: google/gemma-1.3b-it
  • Fine-tuned with: Unsloth + 🤗 TRL (a minimal training sketch follows this list)
  • Dataset: EmpatheticDialogues
  • Model size: ~1B parameters (Safetensors, BF16/F16)
  • Training location: Kaggle (2×T4 GPUs)
  • Intended use: Friendly, emotionally supportive chatbots
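
The training code itself isn't included in this card. For readers who want to reproduce the setup, the sketch below shows one plausible Unsloth + TRL recipe under stated assumptions: the LoRA rank, learning rate, sequence length, dataset field mapping, and the SFTTrainer keyword names (some of which moved into SFTConfig in newer TRL releases) are all illustrative, not the author's actual configuration.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit so it fits on a single T4 (illustrative settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-1.3b-it",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are assumptions, not the card's values.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# EmpatheticDialogues rows have "prompt" (situation/opener) and "utterance" (reply)
# columns; newer `datasets` versions may require trust_remote_code=True here.
dataset = load_dataset("facebook/empathetic_dialogues", split="train")

def to_text(example):
    # Hypothetical mapping: render one user turn and one empathetic reply
    # into the model's chat format as a single training string.
    messages = [
        {"role": "user", "content": example["prompt"]},
        {"role": "assistant", "content": example["utterance"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()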

💬 Chat Template & Interface

This model uses Hugging Face's chat template format. The chatbot behaves like a sweet and caring friend who responds with emotionally intelligent and supportive language, using cute nicknames and emojis. Here's how you can interact with it:

from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
import torch

model_id = "sajeewa/empathy-chat-gemma"

# Load the tokenizer (which carries the chat template) and the model weights.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to reduce memory use
    device_map="auto",           # place layers on the available GPU(s) automatically
)

# Start the conversation with a system prompt that sets the caring persona.
chat_history = [
    {
        "role": "system",
        "content": (
            "You are an empathetic AI and your friend. Always give lovely caring messages. "
            "Understand the user's feelings. Then provide a caring response. "
            "Please give responses as a good friend, using lovely words like 'baby', 'my cutey', etc. 💖 "
            "Use emojis to be calming 😊. Continue conversations with a warm tone."
        )
    }
]

# Append the user's message to the running history.
user_input = "I'm feeling lonely today."
chat_history.append({"role": "user", "content": user_input})

# Render the chat history into the model's prompt format; add_generation_prompt
# appends the cue for the assistant's next turn.
prompt = tokenizer.apply_chat_template(
    chat_history,
    tokenize=False,
    add_generation_prompt=True,
)
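
# For Gemma-style chat templates, the rendered prompt typically looks like the
# example below; the exact tokens, and how the system message is folded in,
# depend on the template shipped with this fine-tune:
#
#   <start_of_turn>user
#   I'm feeling lonely today.<end_of_turn>
#   <start_of_turn>model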

# Tokenize the prompt and move it to the model's device.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Stream tokens to stdout as they are generated, hiding the prompt itself.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

output = model.generate(
    **inputs,
    max_new_tokens=128,
    temperature=0.7,   # moderate randomness for a warm, natural tone
    top_p=0.95,
    top_k=50,
    do_sample=True,
    streamer=streamer,
)

# `output` contains the prompt followed by the reply; decode only the new tokens.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)
print(response)
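
To keep the conversation going, feed the model's reply back into chat_history and repeat the same render-generate-decode cycle. Below is a minimal follow-up turn reusing the variables defined above; the second user message is just an example:

# Record the assistant's reply so the next turn sees the full conversation.
chat_history.append({"role": "assistant", "content": response})

# Next user turn: re-render the full history and generate again.
chat_history.append({"role": "user", "content": "Thanks, that helps a little."})
prompt = tokenizer.apply_chat_template(
    chat_history, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True,
                        temperature=0.7, top_p=0.95, top_k=50)
response = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)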