
- **Uploaded model:** gemma-3N-Just-Chatty
- **Developed by:** Alpha AI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3n-E2B-it
- **Format:** Float16 (safetensors), convertible to GGUF for local deployments
## Overview
gemma-3N-Just-Chatty is a fine-tuned Gemma-3N conversational model, optimized for human-level fluency, higher engagement, and emotional intelligence.
**Training Dataset:**
- Public split: Human-Like-DPO-Dataset
- Proprietary split: Alpha AI “Friendly-Empathy” corpus (private)
- Samples: 10,884 (public) + 6,100 (private)
- Topics: 256 + everyday social chats
- Alignment: Conversational SFT + DPO
- Framework: Unsloth + Hugging Face TRL
This model excels at delivering chatty, context-aware, and personable dialogue for real-world applications, with GGUF-f16 conversion enabling efficient local/edge inference. We do not provide a GGUF version yet, as there seem to be some compatibility issues between Unsloth and Hugging Face. You can copy this repo and create your own GGUF build using https://huggingface.co/spaces/ggml-org/gguf-my-repo.
## What's New in Just-Chatty?
- Blends open-source dialogues with proprietary, friendliness-focused data to encourage warmth and empathy.
- Dual-response structure: Every prompt has both a casual “human-like” answer and a more formal “AI” answer for preference optimization.
- Strict adherence to Gemma-3N chat templates ensures robust multi-turn conversations.
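For context, the Gemma chat format wraps each turn in `<start_of_turn>`/`<end_of_turn>` markers. A minimal sketch of building such a prompt by hand is below; in practice, prefer `tokenizer.apply_chat_template`, which applies the model's bundled template for you:

```python
# Build a Gemma-style multi-turn prompt by hand.
# Turn markers follow the published Gemma chat format; for real use,
# prefer tokenizer.apply_chat_template, which handles this automatically.

def format_gemma_chat(messages):
    """Render a list of {'role', 'content'} dicts as a Gemma chat prompt."""
    parts = []
    for msg in messages:
        parts.append(f"<start_of_turn>{msg['role']}\n{msg['content']}<end_of_turn>\n")
    parts.append("<start_of_turn>model\n")  # cue the model to respond
    return "".join(parts)

prompt = format_gemma_chat([
    {"role": "user", "content": "What are some fun DIY projects for a rainy day?"},
])
print(prompt)
```

Keeping every turn inside these markers is what makes multi-turn conversations robust: the model can always tell where one speaker ends and the next begins.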
## Model Details
| Attribute | Value |
|---|---|
| Base Model | unsloth/gemma-3n-E2B-it |
| Type | Causal Language Model (text generation) |
| Fine-tuned By | Alpha AI |
| Training Framework | Unsloth + HF TRL |
| Precision | 16-bit float (f16) |
| Context Length | 32,000 tokens |
| Languages | English (en) |
Suggested generation parameters: `temperature=1.0`, `top_k=64`, `top_p=0.95`.
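These suggested parameters can be collected into a kwargs dict and passed straight to `model.generate`. A minimal sketch (the `max_new_tokens` value is an illustrative assumption, not a card recommendation):

```python
# Suggested sampling settings from the model card, as generate() kwargs.
# max_new_tokens is an illustrative choice, not from the model card.
generation_kwargs = {
    "do_sample": True,       # sampling must be on for temperature/top_k/top_p to apply
    "temperature": 1.0,
    "top_k": 64,
    "top_p": 0.95,
    "max_new_tokens": 512,
}

# Usage: outputs = model.generate(**inputs, **generation_kwargs)
```

Note that `do_sample=True` is required: with greedy decoding, `temperature`, `top_k`, and `top_p` are ignored.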
## Example Dataset Structure
Each training sample includes:
- Conversational Prompt: Natural, engaging question.
- Human-Like Response: Relatable, chatty, emotionally nuanced answer.
- Formal Response: Professional, structured answer for DPO contrast.
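This dual-response structure maps naturally onto the prompt/chosen/rejected triplets that TRL's DPO trainer expects, with the human-like answer preferred over the formal one. A hypothetical sample (the text is invented for illustration; it is not from the actual corpus):

```python
# Hypothetical training sample; field names follow TRL's DPO convention
# (prompt / chosen / rejected), with the chatty answer as "chosen".
sample = {
    "prompt": "Ugh, my flight got delayed again. Any tips for killing time?",
    # Human-like response: relatable, chatty, emotionally nuanced (preferred).
    "chosen": "Oh no, airport limbo is the worst! Honestly, people-watching "
              "plus a good podcast gets me through every time.",
    # Formal response: professional and structured (the DPO contrast).
    "rejected": "Flight delays can be managed by: 1) reviewing rebooking "
                "options, 2) locating lounge facilities, 3) planning work tasks.",
}
```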
## Use Cases
- Conversational AI: Chatbots, customer support, digital companions.
- Local/Edge Deployments: Runs efficiently on laptops, servers, and edge devices via GGUF-f16.
- Research: Ideal for DPO and SFT studies in alignment, empathy modeling, or instruction following.
## Performance Highlights
- Empathy & Engagement: Responds with warmth and situational awareness, avoiding a “robotic” tone.
- Efficiency: Maintains extended, coherent conversations with low latency.
- Balanced Realism: Produces lifelike small talk while respecting safety filters.
## Limitations and Responsible Use
Despite careful alignment, the model may reproduce social biases or inaccuracies present in training data. Always audit output before deploying in sensitive settings, and supervise use in production or critical-decision contexts.
## License
Released under the Apache-2.0 license.
## Acknowledgments
Thanks to the Unsloth team, Hugging Face TRL contributors, and the creators of the Human-Like-DPO-Dataset for advancing open, natural conversational AI.
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("alpha-ai/gemma-3N-Just-Chatty", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("alpha-ai/gemma-3N-Just-Chatty")

# Let the tokenizer apply the Gemma chat template instead of hand-writing turn markers.
messages = [{"role": "user", "content": "What are some fun DIY projects for a rainy day?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, do_sample=True, temperature=1.0, top_k=64, top_p=0.95, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Website: https://www.alphaai.biz