The Sam V series began with the goal of teaching an AI an identity, a persona, and reasoning. The experiment finished on 12 May 2025.
# Sam-reason-v2

A fine-tuned evolution of Sam-reason-v1, this model continues the legacy of sarcastic villain-style reasoning with sharper logic and better conversation flow.

Built by Smilyai-labs, Sam-reason-v2 is trained for complex reasoning, character-based roleplay, and aggressive personality responses with flair.
## Capabilities
- Enhanced multi-step reasoning (CoT)
- Roleplay-friendly villain AI personality
- Efficient inference on CPU (a loading sketch for CPU-only setups follows this list)
- Compact model (<490MB) for edge deployment
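
For CPU-only or edge setups, a minimal loading sketch is shown below. The dtype and memory flags are illustrative assumptions, not settings documented for this model.

```python
# Minimal sketch: loading Sam-reason-v2 for CPU-only inference.
# torch.float32 and low_cpu_mem_usage are illustrative choices,
# not documented defaults for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Smilyai-labs/Sam-reason-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,   # most CPUs lack fast fp16 kernels
    low_cpu_mem_usage=True,      # reduce peak RAM while loading weights
)
model.eval()
```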
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Sam-reason-v2")
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Sam-reason-v2")

# Step-by-step (chain-of-thought) style prompt
prompt = "Why do humans fear the dark? Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate and decode the response
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
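
For the villain persona, sampled decoding usually gives livelier output than the greedy default. The parameters below are illustrative starting points, not tuned values for this model.

```python
# Illustrative sampling settings; adjust to taste.
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.8,         # higher = more flair, lower = more focused
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```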
## Training Details
- Base model: Smilyai-labs/Sam-reason-v1
- Framework: Hugging Face Transformers
- Model architecture: TinyLlama-1.1B
- Dataset: smilyai/sam-reason-dataset-v2
- Training platform: Google Colab free tier
- Training method: full fine-tuning in FP16 (a rough sketch follows this list)
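
The sketch below illustrates full FP16 fine-tuning with the Transformers Trainer on the dataset above. The `text` column name, tokenization scheme, and hyperparameters are assumptions for illustration; they are not documented for this model.

```python
# Hypothetical sketch of full FP16 fine-tuning with the Transformers Trainer.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base = "Smilyai-labs/Sam-reason-v1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

dataset = load_dataset("smilyai/sam-reason-dataset-v2", split="train")

def tokenize(batch):
    # Assumes a "text" column; adapt to the dataset's actual schema.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="sam-reason-v2",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # small batch to fit a Colab GPU
    num_train_epochs=1,
    learning_rate=2e-5,
    fp16=True,                       # FP16 training as stated above
    logging_steps=50,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```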
## Use Cases
| Works Well For | Avoid Using For |
|---|---|
| Chatbots & character AIs | Medical, legal, or critical systems |
| Logic/step-by-step prompts | Factual QA without context |
| Sarcastic AI RP in games or simulations | Safety-critical deployments |
## Deployment
You can deploy the model with Hugging Face Inference Endpoints, or serve it locally with Text Generation Inference:

```bash
text-generation-launcher --model-id Smilyai-labs/Sam-reason-v2
```
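
Once a Text Generation Inference server is running, it can be queried over HTTP. The sketch below assumes TGI's standard `/generate` endpoint on localhost port 8080; adjust the URL and port to your deployment.

```python
# Hypothetical client call to a locally running TGI server.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "Why do humans fear the dark? Let's think step by step.",
        "parameters": {"max_new_tokens": 100, "temperature": 0.8},
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```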
## License
MIT: free for research and commercial use.
## Related
- Smilyai-labs/Sam-reason-v1 (the previous model in the Sam series)

Crafted by Smilyai-labs. Small Models. Big Reasoning. Villain Energy.