This model was converted to GGUF format from [`SicariusSicariiStuff/Impish_LLAMA_3B`](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B) for more details on the model.

---

"With that naughty impish grin of hers, so damn sly it could have ensnared the devil himself, and that impish glare in her eyes, sharper than a succubus fang, she chuckled impishly with such mischief that even the moon might’ve blushed. I needed no witch's hex to divine her nature—she was, without a doubt, a naughty little imp indeed."

## Model Details

- Intended use: Role-Play, General tasks.
- Censorship level: Medium - Low (5.5 / 10, where 10 is completely uncensored)

"I want some legit RP models of LLAMA 3.2 3B, we got phones!"

"So make one."

"K."

This model was trained on ~25M tokens, in 3 phases. The first and longest phase was an FFT (full fine-tune) to teach the model new stuff, and to confuse the shit out of it too, so it would be a little bit less inclined to use GPTisms.

It worked pretty well. In fact, the model was so damn thoroughly confused that the little devil didn't even make any sense at all, but the knowledge was there.

In the next phase, a deep QLoRA (R = 512) was used on a new dataset to... unconfuse it. A completely different dataset was used to avoid overfitting.

Finally, another somewhat deep QLoRA (R = 128) was used to tie it all together in a coherent way and connect all the dots, again with a different dataset.
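
For readers who want to picture what those phases look like in code, here is a minimal, hypothetical sketch of the phase-2 setup (QLoRA at rank R = 512) using Hugging Face `peft` and `bitsandbytes`. The card specifies only the rank; the checkpoint path, target modules, alpha, and dropout below are all assumptions.

```python
# Hypothetical sketch of a QLoRA pass at R = 512, as described above.
# Only the rank comes from the card; everything else is an assumption.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # the "Q" in QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "path/to/phase1-fft-checkpoint",   # hypothetical: the "confused" FFT model
    quantization_config=bnb_config,
    device_map="auto",
)

peft_config = LoraConfig(
    r=512,                             # the "deep" rank the card mentions
    lora_alpha=512,                    # assumption: alpha = r
    lora_dropout=0.05,                 # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()     # only the LoRA adapters train
```

The phase-3 pass would look the same with `r=128` (and, per the card, yet another dataset).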

The results are sometimes surprisingly good; it even managed to fool some people into thinking it's a MUCH larger model, and sometimes... sometimes it behaves just like you would expect a 3B model to...

Fun fact: the model was uploaded while there were 200 ICBMs headed my way, just flying there in the sky.

I lived, so expect more models in the future!

Model instruction template: Llama-3-Instruct

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
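
A minimal sketch of filling this template by hand in Python; the special tokens come straight from the template above, while the system/user strings are example placeholders:

```python
# Build a raw Llama-3-Instruct prompt string from the template above.
# Generation then continues from the assistant header: the model
# produces {output} and finishes with <|eot_id|>.
def build_prompt(system_prompt: str, user_input: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_prompt("You are a naughty little imp.", "Introduce yourself."))
```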

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux): `brew install llama.cpp`
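
If you would rather call the model from Python than from the CLI, the `llama-cpp-python` bindings (`pip install llama-cpp-python`) can load the same GGUF file. A minimal sketch follows; the filename is a placeholder for whichever quant file this repo actually contains.

```python
# Minimal sketch using the llama-cpp-python bindings (a separate package,
# not the brew-installed CLI). The GGUF filename below is hypothetical --
# substitute the file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="impish_llama_3b-q4_k_m.gguf",  # hypothetical filename
    n_ctx=2048,                                # context window size
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me about imps."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```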