Update README.md
README.md
Thorns' design is based on the concept of purposed segmentation:

Logic Segment (MK1):

Fine-tuned parent models were hand-selected and reviewed for their datasets, performance, least restrictive censorship, and community perception of coherence and utility. Ultimately we settled on four models, merged them in pairs of two, then combined those offspring into a quad-merged logic cluster.

All four models were merged using the SLERP method. Yes, the name is annoyingly funny. SLERP.

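SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than the straight line, which preserves weight magnitude better than plain averaging. A minimal NumPy sketch of the idea (an illustration of the math, not the exact merge tooling used here):

```python
import numpy as np

def slerp(w0, w1, t=0.5, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    Treats each tensor as a flat vector and interpolates along the
    great-circle arc between them, falling back to plain linear
    interpolation when the vectors are nearly colinear.
    """
    v0, v1 = w0.ravel(), w1.ravel()
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    # Cosine of the angle between the two weight vectors.
    dot = np.clip(np.dot(v0 / (n0 + eps), v1 / (n1 + eps)), -1.0, 1.0)
    omega = np.arccos(dot)
    if np.abs(np.sin(omega)) < eps:
        # Nearly parallel vectors: plain lerp is numerically stable.
        return (1.0 - t) * w0 + t * w1
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * v0 + s1 * v1).reshape(w0.shape)
```

At `t = 0` this returns the first parent, at `t = 1` the second, and in between it walks the arc; a midpoint merge uses `t = 0.5`.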
Creativity and Imagination Segment (MK1):

Flawed first approach (a takeaway on LoRAs):

We then decided the creativity and imagination segment could be as simple as one model, especially if its dataset design, tagging, training quality, and proven track record are above and beyond. KoboldAI's Holodeck model is the result of years of collected, organized, tagged, deduped, and cleaned data. Holodeck alone would be more than sufficient for the segment we view as the 'subconscious' of the model ensemble; however, we applied the LIMA RP PEFT to it for extended variety of a different kind.

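Applying a PEFT/LoRA adapter in a merge amounts to folding its low-rank update back into the base weights. A rough sketch of that arithmetic (a simplified illustration, not the actual PEFT tooling):

```python
import numpy as np

def merge_lora(W, A, B, alpha):
    """Fold a LoRA adapter into a base weight matrix W of shape (out, in).

    LoRA stores the update as a product of two low-rank matrices:
    A has shape (r, in) and B has shape (out, r), so B @ A is a
    rank-r delta; alpha / r is the conventional LoRA scaling factor.
    """
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)
```

Because the folded delta has rank at most `r`, an adapter nudges the base weights along a narrow subspace, which is why stacking several adapters (as we did next) can compound in unpredictable ways.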
That's where we got carried away. LoRAs offer unique augmentation possibilities for model merges, and the decision was made to take the result of that segment and add two more LoRAs to see if they further extended Holodeck, settling on Kimiko and Janine, two very different RP and conversational LoRAs.

This was a bad move: when we SLERP-merged that version of the imagination segment with the logic segment, the result was a ranting mess. It followed instructions, but it was the equivalent of a child scribbling all over the place, ignoring obvious chains of logic, a mushy amalgam of awkward creative behavior with no semblance of coherency.

The composite model was slated to be named 13B-Astronomicon; after all the work that went into it and the flatly bland result, the name was abandoned. The next move, a byproduct experiment of Astronomicon, is what became Thorns... because this project became a thorn in our side.

Because pain is fun, and persistence in design iteration is the only way forward, we reworked our approach to both segment ensembles following one idea: all three Roleplay and Conversational LoRAs stay no matter what, because sure, why not add arbitrary rules to the redesign phase at this point.

---

Logic and Creativity Segments (MK2 Final - what this model is actually composed of):

So after a few key meetings with our top teams of memegineers, we distilled the perfect solution, which was promptly approved by the Roko's Basilisk Shadow Council, fast-tracking what is now 13B-Thorn-l2 for production assembly.

Also, none of that shit happened; I just redid everything like this:

-Model Merge Ensemble Key-

{} = SLERP Merge | [] = PEFT Merge | () = Composite Model
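Reading the key on a toy example (the tensors and LoRA delta below are hypothetical placeholders, not the actual recipe): `{a b}` SLERP-merges two parents, `[...]` folds a PEFT/LoRA delta into the result, and `(...)` marks the final composite.

```python
import numpy as np

# Placeholder weight vectors for two parent models and one LoRA delta.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
lora_delta = np.array([0.1, -0.1])

# {a b} -> midpoint SLERP merge (t = 0.5) of the two parents.
cos_omega = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
slerped = (np.sin(0.5 * omega) * (a + b)) / np.sin(omega)

# [{a b} lora] -> PEFT merge: fold the adapter delta into the weights.
peft_merged = slerped + lora_delta

# ({...}[...]) -> the composite model is the final merged tensor.
composite = peft_merged
```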