Commit de2dd7c (verified) by Boning c · Parent: 155d521 · Update README.md

# 🧠 Sam-large-v1-speacil

**Model Author:** [Smilyai-labs](https://huggingface.co/Smilyai-labs)
**Model Size:** ~1.1B parameters
**Architecture:** Decoder-only Transformer
**Base Model:** TinyLlama
**License:** MIT
**Language:** English
**Tags:** #text-generation, #chatbot, #instruction-tuned, #smilyai, #sam

## 📝 Model Summary

**Sam-large-v1-speacil** is a customized large language model developed by Smilyai-labs for conversational AI, instruction-following tasks, and general-purpose text generation. It is a fine-tuned and enhanced variant of the `Sam-large-v1` model, with particular improvements in reasoning, identity handling, and emotional response learning.

This model is trained to represent the persona "Sam," an intelligent and slightly chaotic AI assistant with distinctive behavior traits, making it suitable for role-play bots, experimental dialogue systems, and character-driven applications.

---

## 🧠 Intended Use

* Instruction-based text generation
* Character chat and roleplay
* Experimental alignment behaviors
* Creative writing and scenario building
* Local deployment for private assistant use

---

## 🚫 Limitations

* May hallucinate facts or invent information
* Can produce unexpected outputs when prompted ambiguously
* Not suitable for production environments without safety layers
* Behavior is tuned to include personality traits (such as mischief) that may not suit all applications

---
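Because the model ships without built-in safety layers, any deployment should gate its outputs before showing them to users. As a purely illustrative sketch (the `is_safe` function and the blocklist terms are hypothetical, not part of this model), a minimal keyword gate might look like:

```python
# Minimal illustrative output gate: reject responses containing flagged terms.
# A real deployment needs a proper moderation layer; this is only a sketch.
BLOCKLIST = {"password", "ssn"}  # hypothetical flagged terms

def is_safe(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(is_safe("Here is a story about a dragon."))  # harmless text passes
print(is_safe("My password is hunter2."))          # flagged term is caught
```

A keyword match is only a placeholder for the kind of check a production system would run with a real moderation model or API.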

## 📚 Training Details

* Fine-tuned on synthetic and curated datasets using LoRA/full fine-tuning
* Special prompt styles were introduced to enhance behavior
* Dataset includes:
  * Multi-step reasoning samples
  * Emotionally reactive instruction responses
  * SmilyAI-specific identity alignment examples

---
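The LoRA option mentioned above trains a small low-rank update instead of the full weight matrix, which is why it is so much cheaper than full fine-tuning. A back-of-the-envelope sketch of the savings, with hypothetical dimensions (the card does not state the actual hidden size or rank used):

```python
# Illustrative LoRA parameter arithmetic; numbers are hypothetical,
# not this model's actual training configuration.
hidden = 2048          # assumed size of one square weight matrix W (hidden x hidden)
rank = 8               # assumed LoRA rank r

full_params = hidden * hidden      # full fine-tuning updates every entry of W
lora_params = 2 * hidden * rank    # LoRA trains A (hidden x r) and B (r x hidden)

print(full_params)                 # 4194304
print(lora_params)                 # 32768
print(full_params // lora_params)  # 128x fewer trainable parameters per matrix
```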
## 🔧 How to Use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")
tokenizer = AutoTokenizer.from_pretrained("Smilyai-labs/Sam-large-v1-speacil")

input_text = "You are Sam. Who are you?"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
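The snippet above uses greedy decoding, which tends to be repetitive for a personality-driven model. If you want more varied, persona-flavored output, you can switch to sampling; this fragment extends the example above, and the parameter values are illustrative starting points rather than settings documented for this model:

```python
# Sampling-based generation; values are illustrative, not tuned for this model.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.8,         # soften the next-token distribution
    top_p=0.95,              # nucleus sampling cutoff
    repetition_penalty=1.1,  # discourage repeated phrases
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```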

---

## 🤝 Citation

If you use this model in your work, please cite it as:

```bibtex
@misc{samlargev1speacil2025,
  title={Sam-large-v1-speacil},
  author={Smilyai-labs},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/Smilyai-labs/Sam-large-v1-speacil}}
}
```