THIS IS THE CPU EDITION

GPU VERSION HERE: [GPU EDITION](https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil)

🧠 Sam-large-v1-speacil

  • Model Author: Smilyai-labs
  • Model Size: ~1.1B parameters
  • Architecture: Decoder-only Transformer
  • Base Model: TinyLLaMA
  • License: MIT
  • Language: English
  • Tags: #text-generation, #chatbot, #instruction-tuned, #smilyai, #sam

πŸ“ Model Summary

Sam-large-v1-speacil is a customized large language model developed by Smilyai-labs for conversational AI, instruction-following tasks, and general-purpose text generation. It is a fine-tuned and enhanced variant of the Sam-large-v1 model, with special improvements in reasoning, identity handling, and emotional response learning.

This model is trained to represent the persona "Sam," an intelligent and slightly chaotic AI assistant with unique behavior traits, making it suitable for role-play bots, experimental dialogue systems, and character-driven applications.


🧠 Intended Use

  • Instruction-based text generation
  • Character chat and roleplay
  • Experimental alignment behaviors
  • Creative writing and scenario building
  • Local deployment for private assistant usage
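
As a concrete example of the character-chat and local-deployment use cases, a minimal interactive loop on CPU might look like the sketch below. The plain-text chat format and persona line are assumptions for illustration; the card does not document an official prompt template.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Smilyai-labs/Sam-large-v1-speacil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # loads on CPU when no GPU is configured

# Hypothetical plain-text chat format; adjust to whatever template works best for you.
history = "You are Sam, a slightly chaotic but helpful AI assistant.\n"
while True:
    user = input("You: ")
    if user.strip().lower() in {"quit", "exit"}:
        break
    prompt = history + "User: " + user + "\nSam:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
    # Decode only the newly generated tokens, not the prompt.
    reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print("Sam:", reply.strip())
    history = prompt + " " + reply.strip() + "\n"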

🚫 Limitations

  • May hallucinate facts or invent information
  • Can produce unexpected outputs when prompted ambiguously
  • Not suitable for production environments without an added safety layer (see the sketch after this list)
  • Behavior is tuned to have personality traits (like mischief) that may not suit all applications
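
If you do wrap the model in a safety layer, even a minimal post-generation filter helps. The sketch below is a placeholder illustration only: the blocklist and refusal message are invented, not shipped with the model, and a real deployment should use a proper moderation system.

# Hypothetical post-generation filter; a stand-in for a real moderation layer.
BLOCKED_TERMS = {"example_blocked_term"}  # placeholder blocklist, not part of the model

def filter_output(text: str) -> str:
    """Return the model's reply, or a refusal if it matches the blocklist."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "[response withheld by safety filter]"
    return text

print(filter_output("Hello, I'm Sam!"))  # passes through unchanged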

📚 Training Details

  • Fine-tuned on synthetic and curated datasets using a mix of LoRA and full fine-tuning (see the sketch after this list)
  • Special prompt styles were introduced to shape the persona's behavior
  • Dataset includes:
    • Multi-step reasoning samples
    • Emotionally reactive instruction responses
    • SmilyAI-specific identity alignment examples
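
The exact training recipe is not published here; as a rough illustration of the LoRA path mentioned above, a setup with the peft library could look like the sketch below. The base checkpoint, rank, and target modules are assumptions, not the actual configuration.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed base checkpoint, for illustration only
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach low-rank adapters to the attention projections (typical LLaMA-style targets).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Training would then proceed with a standard Trainer / SFT loop over the
# reasoning, emotional-response, and identity-alignment data described above.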

🔧 How to Use

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer; with no GPU configured this runs on CPU.
model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")

# Prompt the "Sam" persona and generate a short reply with greedy decoding.
input_text = "You are Sam. Who are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
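
Because this is the CPU edition, the snippet above runs on CPU by default and uses greedy decoding. For a livelier persona you can switch to sampling; the parameter values below are suggestions, not tuned defaults.

# Reuses model, tokenizer, and inputs from the snippet above.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,        # sample instead of greedy decoding
    temperature=0.8,       # higher = more varied replies
    top_p=0.95,            # nucleus sampling
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))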

🤝 Citation

If you use this model in your work, please cite it as:

@misc{samlargev1speacil2025,
  title={Sam-large-v1-speacil},
  author={Smilyai-labs},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil-v1-cpu}}
}