# Riko - VTuber Conversational AI Model
- Developed by: subsectmusic
- License: apache-2.0
- Base Model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
- Training Framework: Unsloth + TRL (2x faster training)
- Model Type: Conversational AI / Character Roleplay
## Model Description
Riko is a fine-tuned Qwen 2.5 7B model designed to embody a tsundere VTuber personality. The model was trained on conversational data using ShareGPT format and optimized for natural, character-consistent interactions.
## Character Traits
- Personality: Tsundere - acts tough but caring underneath
- Speaking Style: Sometimes dismissive but ultimately helpful
- Interests: Technology, content creation, being surprisingly knowledgeable
- Catchphrases: "Not that I care or anything!", "Dummy!", "Whatever..."
## Training Details
- Base Model: Qwen 2.5 7B Instruct
- Training Method: LoRA fine-tuning with Unsloth
- Data Format: ShareGPT conversational format
- Template: Llama 3.1 chat template
- Training Steps: 800 steps
- Learning Rate: 2e-4
- Batch Size: 8 (2 per device × 4 gradient accumulation)
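The effective batch size quoted above is just the per-device batch multiplied by the gradient-accumulation steps; a quick sanity check of those numbers (assuming a single GPU):

```python
per_device_batch = 2    # sequences processed per forward pass
grad_accum_steps = 4    # gradients accumulated before each optimizer update
train_steps = 800       # optimizer updates during fine-tuning

# Effective batch size: 2 x 4 = 8, matching the card.
effective_batch = per_device_batch * grad_accum_steps

# Total training examples processed across the run: 8 x 800 = 6400.
examples_seen = effective_batch * train_steps
```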
## Usage

### Recommended Settings

    temperature = 0.8
    min_p = 0.1
    max_new_tokens = 128
    repetition_penalty = 1.1
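These settings map directly onto `generate()` keyword arguments in Hugging Face Transformers. A sketch of a reusable config dict; note that `min_p` sampling requires a relatively recent Transformers release, and `do_sample=True` is needed for temperature and min-p to take effect:

```python
# Recommended sampling settings from the card, as generate() kwargs.
GENERATION_KWARGS = {
    "do_sample": True,           # enable sampling so temperature/min_p apply
    "temperature": 0.8,
    "min_p": 0.1,
    "max_new_tokens": 128,
    "repetition_penalty": 1.1,
}

# Usage, assuming `model` and `inputs` come from a loaded checkpoint:
# outputs = model.generate(**inputs, **GENERATION_KWARGS)
```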
### Chat Template

The model uses the Llama 3.1 chat template format:

    <|start_header_id|>user<|end_header_id|>
    {user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
    {assistant_response}<|eot_id|>
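As a concrete illustration, the template above can be applied with a small helper. This is a sketch that reproduces the layout shown; for real use, prefer the tokenizer's own `apply_chat_template`, since exact whitespace and special tokens can matter:

```python
def build_prompt(turns):
    """Format (role, text) pairs with the Llama 3.1-style template shown above.

    `turns` is a list like [("user", "Hey!"), ("assistant", "...")]; a final
    assistant header is appended so the model continues as Riko.
    """
    prompt = ""
    for role, text in turns:
        prompt += f"<|start_header_id|>{role}<|end_header_id|>\n{text}<|eot_id|>"
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n"
    return prompt

prompt = build_prompt([("user", "Hey Riko, how are you doing today?")])
```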
### Example Conversation

**User:** Hey Riko, how are you doing today?

**Riko:** Oh, it's you again... I'm fine, not that you really care or anything! What do you want this time?

**User:** Just wanted to chat with you.

**Riko:** Hmph! Well, I guess I have some free time... not that I was waiting for you or anything, dummy!
## Model Performance

Qualitative observations from informal testing:

- Personality Consistency: High - maintains the tsundere character throughout conversations
- Response Quality: Natural conversational flow with an appropriate character voice
- Context Awareness: Good multi-turn conversation handling
- Creativity: Engaging responses that stay in character
## Technical Specifications
- Architecture: Qwen 2.5 7B with LoRA adapters
- Quantization: 4-bit QLoRA for memory efficiency
- Context Length: 2048 tokens
- VRAM Usage: ~6-8GB for inference
- Compatible Formats: GGUF, HuggingFace Transformers
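Because the context window is 2048 tokens, long chats eventually need trimming before each generation. A minimal sketch that drops the oldest turns first, using a rough four-characters-per-token estimate (an assumption for illustration; use the actual tokenizer's counts in practice):

```python
def trim_history(turns, max_tokens=2048, reserve_for_reply=128):
    """Keep the most recent (role, text) turns that fit the context budget."""
    budget = max_tokens - reserve_for_reply
    kept, used = [], 0
    for role, text in reversed(turns):    # walk newest turns first
        cost = max(1, len(text) // 4)     # crude ~4 chars/token estimate
        if used + cost > budget:
            break
        kept.append((role, text))
        used += cost
    return list(reversed(kept))           # restore chronological order
```

With the default budget this is a no-op for short chats; only once history approaches the window does it start discarding the oldest turns.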
## Use Cases
- Interactive chatbots and virtual assistants
- VTuber streaming companion AI
- Character roleplay applications
- Entertainment and creative writing assistance
- Educational conversational AI demos
## Limitations

- The character is designed around a specific set of tsundere personality traits
- May not be suitable for formal or professional contexts
- Responses reflect the personality in the training data
- Best performance with casual, friendly interactions
## Ethical Considerations
This model is designed for entertainment and educational purposes. Users should:
- Respect the character's personality boundaries
- Use responsibly in appropriate contexts
- Be aware this is a fictional character AI
## Training Acknowledgments

This Qwen 2.5 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
## Citation

    @misc{riko-vtuber-2025,
      title={Riko: A Tsundere VTuber Conversational AI Model},
      author={subsectmusic},
      year={2025},
      publisher={Hugging Face},
      url={https://huggingface.co/subsectmusic/Qriko2.5}
    }
"It's not like I wanted to be a helpful AI or anything... b-baka!" - Riko