**This is the CPU edition.** The GPU edition is available here: [GPU edition](https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil)
# Sam-large-v1-speacil
- **Model Author:** Smilyai-labs
- **Model Size:** ~1.1B parameters
- **Architecture:** Decoder-only Transformer
- **Base Model:** TinyLLaMA
- **License:** MIT
- **Language:** English
- **Tags:** #text-generation, #chatbot, #instruction-tuned, #smilyai, #sam
## Model Summary
Sam-large-v1-speacil is a customized large language model developed by Smilyai-labs for conversational AI, instruction-following tasks, and general-purpose text generation. It is a fine-tuned and enhanced variant of the Sam-large-v1 model, with particular improvements in reasoning, identity handling, and emotional response learning.
This model is trained to represent the persona "Sam," an intelligent and slightly chaotic AI assistant with unique behavior traits, making it suitable for role-play bots, experimental dialogue systems, and character-driven applications.
## Intended Use
- Instruction-based text generation
- Character chat and roleplay
- Experimental alignment behaviors
- Creative writing and scenario building
- Local deployment for private assistant usage
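For local assistant usage, prompts typically pin the Sam persona in a short preamble. A minimal helper is sketched below; the plain "persona + User/Sam turns" layout is an assumption, since the card does not document an official chat template.

```python
def build_prompt(user_message,
                 persona="You are Sam, an intelligent and slightly chaotic AI assistant."):
    """Prepend a persona preamble to the user's message.

    The exact layout here is illustrative only -- adapt it to whatever
    prompt style the model was actually fine-tuned with.
    """
    return f"{persona}\n\nUser: {user_message}\nSam:"

prompt = build_prompt("Who are you?")
```

The resulting string can be passed directly to the tokenizer in the usage example further below.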
## Limitations
- May hallucinate facts or invent information
- Can produce unexpected outputs when prompted ambiguously
- Not suitable for production environments without safety layers
- Behavior is tuned to have personality traits (like mischief) that may not suit all applications
## Training Details
Fine-tuned on synthetic and curated datasets using LoRA and full fine-tuning. Special prompt styles were introduced to shape the persona's behavior.

The dataset includes:
- Multi-step reasoning samples
- Emotionally reactive instruction responses
- SmilyAI-specific identity alignment examples
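Since the card mentions LoRA but does not publish hyperparameters, the following is only an illustrative configuration fragment (using the `peft` library) for a ~1.1B decoder-only model; every value is an assumption, not the settings Smilyai-labs used.

```python
from peft import LoraConfig

# Hypothetical LoRA settings for a TinyLLaMA-style model -- illustrative
# defaults only, not the actual training configuration of this model.
lora_config = LoraConfig(
    r=16,                                 # low-rank adapter dimension
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections (TinyLLaMA naming)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```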
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")

# Build a prompt and generate a response
input_text = "You are Sam. Who are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Citation
If you use this model in your work, please cite it as:
```bibtex
@misc{samlargev1speacil2025,
  title={Sam-large-v1-speacil},
  author={Smilyai-labs},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil-v1-cpu}}
}
```