Triangle104 committed
Commit 88dd236 · verified · 1 Parent(s): d98ec7e

Update README.md

Files changed (1):
  1. README.md +38 -0
README.md CHANGED
@@ -12,6 +12,44 @@ tags:
  This model was converted to GGUF format from [`SicariusSicariiStuff/Nano_Imp_1B`](https://huggingface.co/SicariusSicariiStuff/Nano_Imp_1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Nano_Imp_1B) for more details on the model.
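If you prefer converting locally instead of through the space, llama.cpp ships a converter script. A minimal sketch, assuming the original HF model has already been downloaded; the local paths and output filename here are illustrative, not from this repo:

```bash
# Sketch: local GGUF conversion with llama.cpp's convert_hf_to_gguf.py.
# Assumes the original model weights sit in ../Nano_Imp_1B (illustrative path).
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
pip install -r requirements.txt
python convert_hf_to_gguf.py ../Nano_Imp_1B \
    --outfile nano_imp_1b-f16.gguf --outtype f16
```

The GGUF-my-repo space automates these same steps (plus quantization) server-side, which is why no local llama.cpp checkout is needed when using it.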
+ ---
+ It's the 10th of May, 2025. Lots of progress is being made in the world of AI (DeepSeek, Qwen, etc.), but there has still yet to be a fully coherent 1B RP model. Why?
+
+ Well, at 1B size, the mere fact that a model is even coherent is something of a marvel, and getting it to roleplay feels like asking too much of 1B parameters. Making very small yet smart models is quite hard; making one that does RP is exceedingly hard. I should know.
+
+ I made the world's first 3B roleplay model, Impish_LLAMA_3B, and I thought that was the absolute minimum size for coherence and RP capability. I was wrong.
+
+ One of my stated goals was to make AI accessible and available to everyone, but not everyone can run 13B or even 8B models. Some people only have mid-tier phones; should they be left behind?
+
+ A growing sentiment goes something along the lines of:
+
+ > If your waifu runs on someone else's hardware, then she's not your waifu.
+
+ I'm not an expert in waifu culture, but I do agree that people should be able to run models locally, without their data (knowingly or unknowingly) being used for X or Y.
+
+ I thought my goal of making a roleplay model that everyone could run would only be realized sometime in the future, when mid-tier phones got the equivalent of a high-end Snapdragon chipset. Again I was wrong, as this changes today.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux):
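A minimal sketch of the install plus a test run. The Homebrew formula name is the standard one; the `--hf-repo`/`--hf-file` values below are illustrative assumptions for this conversion, not confirmed quant names:

```bash
# Install llama.cpp; the "llama.cpp" formula works on macOS and Linux.
brew install llama.cpp

# Illustrative invocation: --hf-repo/--hf-file let llama-cli pull a GGUF
# directly from the Hub. Repo and quant filename here are assumptions.
llama-cli --hf-repo Triangle104/Nano_Imp_1B-Q4_K_M-GGUF \
    --hf-file nano_imp_1b-q4_k_m.gguf \
    -p "The meaning to life and the universe is"
```

Swap in whichever quantization file this repo actually provides; llama-cli caches the download, so subsequent runs start immediately.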