---
library_name: transformers
license: mit
datasets:
- mrs83/kurtis_mental_health_final
language:
- en
base_model:
- ethicalabs/Kurtis-SmolLM2-135M-Instruct
pipeline_tag: text-generation
---
|
|
|
**⚠️ Disclaimer: Model Limitations & Retraining Plans**

The **DPO fine-tuned version** is currently **overfitting** to its training dataset.

While this experiment aimed to explore the feasibility of **small, local AI assistants**, the current model struggles with **generalization** and often reproduces patterns from its training data rather than adapting to new inputs.

To address this, we will **repeat the fine-tuning process**, refining both the dataset and the training approach to improve **response accuracy and adaptability**.

The goal remains the same: **a reliable, privacy-first AI assistant that runs locally on edge devices.**

**Stay tuned for updates as we iterate and improve!** 🚀
|
|
|
# Model Card for Kurtis

Kurtis is a mental-health AI assistant designed with empathy at its core.

Unlike other AI models that aim for peak efficiency, Kurtis prioritizes understanding, emotional nuance, and meaningful conversations.

It won’t solve complex math problems or write code, nor will it generate images or videos.

Instead, Kurtis focuses on being a thoughtful companion, offering support, perspective, and human-like dialogue.

It doesn’t strive to break records or chase artificial intelligence supremacy; its goal is to create a space for genuine interaction.

Whether you need someone to talk to, reflect on ideas with, or engage in insightful discussion, Kurtis is there to listen and respond in an understanding way.
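
## Usage

Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, here is a minimal chat usage sketch. It loads the base model listed in this card's metadata (`ethicalabs/Kurtis-SmolLM2-135M-Instruct`); the repository ID of the DPO fine-tuned checkpoint itself is not stated in this card, so substitute it once published. The example assumes a recent `transformers` release with chat-format pipeline support.

```python
# Minimal usage sketch for chatting with Kurtis via the transformers
# text-generation pipeline. The model ID below is the base model from this
# card's metadata; swap in the fine-tuned checkpoint's repo ID if different.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ethicalabs/Kurtis-SmolLM2-135M-Instruct",
)

messages = [
    {"role": "user", "content": "I've been feeling overwhelmed lately. Can we talk?"},
]

# Recent transformers versions accept chat-format message lists directly and
# apply the model's chat template before generation.
outputs = generator(messages, max_new_tokens=256)

# generated_text holds the full conversation; the last entry is the reply.
print(outputs[0]["generated_text"][-1]["content"])
```

Because the model is small (135M parameters), this runs comfortably on CPU, which fits the stated goal of local, privacy-first deployment on edge devices.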