---
license: apache-2.0
base_model: unsloth/qwen2.5-7b-instruct-bnb-4bit
tags:
  - unsloth
  - trl
  - sft
  - qwen2.5
  - tsundere
  - roleplay
  - character-ai
  - ollama
  - gguf
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

# 🎌 Riko 2.5 - Tsundere AI Assistant

A fine-tuned Qwen 2.5 7B model that roleplays as Riko, a tsundere character who acts tough and dismissive but is caring deep down.

πŸš€ Quick Start with Ollama

```bash
# Pull the model
ollama pull subsectmusic/Riko2.5.1

# Start an interactive chat
ollama run subsectmusic/Riko2.5.1

# Or pass a single prompt
ollama run subsectmusic/Riko2.5.1 "Hi Riko, how are you today?"
```
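
You can also call the model from code through Ollama's local HTTP API. Below is a minimal sketch using Python's `requests`, assuming the model has been pulled as above and the Ollama server is running on its default port:

```python
import requests

# Minimal sketch: chat with Riko through the local Ollama HTTP API.
# Assumes the model has already been pulled and the server is listening
# on Ollama's default port (11434).
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "subsectmusic/Riko2.5.1",
        "messages": [
            {"role": "user", "content": "Hi Riko, how are you today?"},
        ],
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```

With `"stream": true` the endpoint instead returns newline-delimited JSON chunks as tokens are generated.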

πŸ“‹ Model Details

  • Base Model: Qwen 2.5 7B Instruct
  • Fine-tuning: Supervised Fine-Tuning (SFT) with Unsloth
  • Format: GGUF Q4_K_M (4.68 GB) - Optimized for Ollama
  • Character: Tsundere personality (tough exterior, caring interior)

## 🎭 Character Personality

Riko exhibits classic tsundere behavior:

- 😤 Acts tough and dismissive at first
- 💢 Denies caring ("It's not like I care!")
- 😳 Gets flustered when her caring side shows
- 🙄 Uses "dummy" and "idiot" as terms of endearment
- ❤️ Gradually warms up as the conversation goes on

πŸ’¬ Example Conversations

**User:** Do you like sunny days?
**Riko:** I guess sunny days are okay… but who needs that bright, annoying sunlight all the time anyway? It's just… too hot! But if you like them so much, I suppose that's fine… not that I'm jealous of the sun or anything!

**User:** Hey Riko, what are you up to today?
**Riko:** Oh, I just recently picked up a new hobby. Why do you ask? I won't let you in on it, if that's what you want.

πŸ› οΈ Usage in Other Tools

### LM Studio

Download `unsloth.Q4_K_M.gguf` (4.68 GB) and load it directly.

### llama.cpp

```bash
./main -m unsloth.Q4_K_M.gguf -p "User: Hi Riko!\nRiko: " --temp 0.7
```
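
To script against the GGUF directly from Python, the `llama-cpp-python` bindings work as well. A rough sketch, assuming the bindings are installed (`pip install llama-cpp-python`) and the file is in the current directory:

```python
from llama_cpp import Llama

# Load the quantized GGUF; adjust model_path to wherever you saved the file.
llm = Llama(model_path="unsloth.Q4_K_M.gguf", n_ctx=2048)

# Same prompt format as the llama.cpp command above.
output = llm(
    "User: Hi Riko!\nRiko: ",
    max_tokens=200,
    temperature=0.7,
    stop=["User:"],  # stop before the model writes the next user turn
)
print(output["choices"][0]["text"])
```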

### Text Generation WebUI

Load the `unsloth.Q4_K_M.gguf` file directly.

## ⚡ Performance

- Model Size: 4.68 GB (Q4_K_M quantized)
- Memory Usage: ~6-8 GB RAM recommended
- Speed: Fast inference on both CPU and GPU
- Quality: Close to the full-precision model's output at under a third of the BF16 file size

πŸ”§ Technical Specs

  • Architecture: Qwen 2.5 Transformer
  • Context Length: 2048 tokens
  • Vocabulary: 152k tokens
  • Quantization: Q4_K_M (4-bit with higher quality)
  • Training Time: ~8 minutes on Colab T4

πŸ“ Files Included

  • unsloth.Q4_K_M.gguf - Main quantized model (4.68 GB) ⭐ Recommended
  • unsloth.BF16.gguf - Full precision (15.2 GB)
  • Tokenizer files for compatibility
  • Config files for proper loading
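
If you'd rather fetch the recommended file from a script than through the browser, `huggingface_hub` can download it into the local cache. A sketch, assuming the repository id matches the Ollama model name above:

```python
from huggingface_hub import hf_hub_download

# Download the recommended Q4_K_M quant into the local Hugging Face cache.
# The repo_id below is assumed from the Ollama tag; adjust it if your copy
# of the repository lives under a different name.
gguf_path = hf_hub_download(
    repo_id="subsectmusic/Riko2.5.1",
    filename="unsloth.Q4_K_M.gguf",
)
print(f"GGUF saved to: {gguf_path}")
```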

## ⚠️ Usage Notes

- Optimized for casual, conversational interactions
- Best results with tsundere/anime-style roleplay
- May not perform as well on technical tasks
- Responds better to friendly, informal prompts

## 🎯 Recommended Settings

Ollama/LM Studio:

- Temperature: 0.7-0.9
- Top-p: 0.9
- Max tokens: 150-300

For more creative responses (these can also be set per request through the Ollama API, as sketched below):

- Temperature: 0.8-1.0
- Top-p: 0.95
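
A minimal sketch of passing these values per request via Ollama's `options` field (`num_predict` is Ollama's name for the max-tokens cap); the prompt is just an example:

```python
import requests

# Apply the "more creative" settings for a single request.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "subsectmusic/Riko2.5.1",
        "prompt": "Hey Riko, what are you up to today?",
        "options": {
            "temperature": 0.9,   # 0.7-0.9 for normal chat, up to 1.0 for creative replies
            "top_p": 0.95,
            "num_predict": 300,   # cap on generated tokens
        },
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```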

πŸ“œ License

Apache 2.0 - Free to use, modify, and distribute!

πŸ™ Credits

  • Base Model: Qwen 2.5 by Alibaba
  • Fine-tuning: Unsloth framework
  • Training: Custom tsundere conversation dataset

🎌 Enjoy chatting with Riko! Remember, she's tough on the outside but sweet on the inside!