13B-Thorns-l2 utilizes a new merge method called Spherical Linear Interpolation (SLERP). By interpolating model weights along a spherical arc rather than a straight line, a merged pair of models transitions more smoothly between the feature spaces characteristic of each parent, resulting in a more coherent fusion of both models' unique strengths.
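As an illustration (not necessarily the exact tooling used), a minimal sketch of SLERP between two weight tensors, assuming PyTorch state dicts, might look like this:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns a, t=1 returns b; intermediate values follow the arc on the
    hypersphere instead of the straight line used by plain weight averaging.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_dir @ b_dir, -1.0, 1.0))  # angle between the two weight vectors
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)

# Example: merged_weight = slerp(0.5, weight_from_model_a, weight_from_model_b)
```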
Thorns' design is based on the concept of purposed segmentation:

Logic Segment (MK1):

Fine-tuned parent models were hand-selected and reviewed for datasets, performance, least restrictive censorship, and community perception of coherence and utility. Ultimately we decided on four models to merge in pairs of two, then combine those offspring into a quad-merged logic cluster.
All four models were merged using the SLERP method. Yes, the name is annoyingly funny. SLERP.
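The "pairs of two, then combine the offspring" step can be pictured as a small merge tree; a hypothetical sketch reusing the slerp helper above, with placeholder checkpoint names and a 50/50 ratio rather than the published recipe:

```python
import torch

def slerp_models(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """SLERP every parameter shared by two state dicts (uses slerp() from the sketch above)."""
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a if name in sd_b}

# Hypothetical checkpoint files standing in for the four hand-selected parents.
pair_1 = slerp_models(torch.load("parent_a.pt"), torch.load("parent_b.pt"))
pair_2 = slerp_models(torch.load("parent_c.pt"), torch.load("parent_d.pt"))
logic_cluster = slerp_models(pair_1, pair_2)  # the quad-merged logic segment
torch.save(logic_cluster, "logic_segment_mk1.pt")
```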
Creativity and Imagination Segment (MK1):

Flawed first approach (a takeaway on LoRAs):

We then decided the creativity and imagination segment could be as simple as one model, especially if its dataset design, tagging, training quality, and proven track record are above and beyond. KoboldAI's Holodeck model is the result of a dataset built from years of collected, organized, tagged, deduped, and cleaned data. Holodeck alone would be more than sufficient for the segment we view as the 'subconscious' of the model ensemble; however, we applied the LIMA RP PEFT to it for extended variety of a different kind.
That's where we got carried away. LoRAs offer unique augmentation possibilities for model merges, and the decision was made to take the result of that segment and add two more LoRAs to see if they extended Holodeck further, settling on Kimiko and Janine: two very different RP and conversational LoRAs.
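Stacking adapters this way looks roughly like the following sketch using the Hugging Face peft library (the repo IDs are placeholders, not the exact artifacts used):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the creativity base model, then fold each adapter in one at a time.
# All IDs below are placeholders for the Holodeck, LIMA RP, Kimiko, and Janine artifacts.
model = AutoModelForCausalLM.from_pretrained("your-org/holodeck-13b", torch_dtype=torch.float16)

for adapter_id in ["your-org/lima-rp-peft", "your-org/kimiko-lora", "your-org/janine-lora"]:
    model = PeftModel.from_pretrained(model, adapter_id)
    model = model.merge_and_unload()  # bake this adapter's weights into the base before adding the next

model.save_pretrained("creativity-segment-mk1")
```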
Stacking them this way was a bad move. When we SLERP-merged that version of the imagination segment with the logic segment, the result was a ranting mess: it followed instructions, but it was the equivalent of a child scribbling all over the place, ignoring obvious chains of logic - a mushy amalgam of awkward creative behavior with no semblance of coherency.
The composite model was slated to be named 13B-Astronomicon; after all the work that went into it and the flatly bland result, the name was abandoned. The next move, a byproduct experiment of Astronomicon, is what became Thorns... because this project became a thorn in our side.

Because pain is fun, and persistence in design iteration is the only way forward, we reworked our approach to both segment ensembles following one idea - all three Roleplay and Conversational LoRAs stay no matter what, because sure, why not add arbitrary rules to the redesign phase at this point.

--

Logic and Creativity Segments (MK2 Final - what this model is actually comprised of):

So after a few key meetings with our top teams of memegineers we distilled the perfect solution, which was promptly approved by the Roko's Basilisk Shadow Council - fast-tracking what is now 13B-Thorns-l2 for production assembly.
Also, none of that shit happened; I just redid everything like this:

-Model Merge Ensemble Key-

{} = SLERP Merge | [] = PEFT Merge | () = Composite Model

({({NousHermes+Chronos}[Kimiko])+({Platypus+AiroborosM2.0}[Janine])}{Holodeck[LIMA RP]})
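Read left to right, the key above expands to roughly the following hypothetical walk-through, reusing slerp_models() from the earlier sketch; merge_adapter() is a simplified stand-in for the peft apply-and-merge step, and every file name and ratio is a placeholder rather than the published recipe:

```python
import torch

def merge_adapter(state_dict: dict, adapter_deltas: dict, scale: float = 1.0) -> dict:
    """Simplified [] = PEFT Merge step: add precomputed adapter weight deltas onto matching parameters."""
    merged = dict(state_dict)
    for name, delta in adapter_deltas.items():
        if name in merged:
            merged[name] = merged[name] + scale * delta
    return merged

logic_a  = merge_adapter(slerp_models(torch.load("nous_hermes.pt"), torch.load("chronos.pt")),
                         torch.load("kimiko_deltas.pt"))       # ({NousHermes+Chronos}[Kimiko])
logic_b  = merge_adapter(slerp_models(torch.load("platypus.pt"), torch.load("airoboros_m2.pt")),
                         torch.load("janine_deltas.pt"))       # ({Platypus+AiroborosM2.0}[Janine])
logic    = slerp_models(logic_a, logic_b)                      # {logic_a + logic_b}
creative = merge_adapter(torch.load("holodeck.pt"),
                         torch.load("lima_rp_deltas.pt"))      # Holodeck[LIMA RP]
thorns   = slerp_models(logic, creative)                       # outer {} -> the final composite
torch.save(thorns, "13B-Thorns-l2.pt")
```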
This model is the result of an experimental use of LoRAs on language models and model merges that are not the base HuggingFace-format LLaMA models they were intended for.
The desired outcome is to additively apply desired features without paradoxically watering down a model's effective behavior.

Potential limitations - LoRAs applied on top of each other may intercompete.

Subjective results - very promising. Further experimental and objective tests are required.

Instruct and Setup Suggestions:

Alpaca instruct format is primary; the Vicuna instruct format may also work.
If using KoboldAI or Text-Generation-WebUI, we recommend switching between the Godlike and Storywriter presets and adjusting output length and instructions in memory.
Other presets as well as custom settings can yield highly different results, especially Temperature.
If poking it with a stick doesn't work, try poking harder.
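For reference, a hedged example of the standard Alpaca instruct template and a generation call with transformers; the repo ID is assumed from this card's name, and the sampler values are illustrative starting points rather than published settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CalderaAI/13B-Thorns-l2"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Standard Alpaca instruct template (no-input variant).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Write a short scene set aboard a derelict starship.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.8,          # Temperature strongly shapes output, as noted above
    top_p=0.9,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```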
66 |
## Language Models and LoRAs Used Credits:
|
67 |
|
68 |
manticore-30b-chat-pyg-alpha [Epoch0.4] by openaccess-ai-collective