digitous committed
Commit 5dc5ccf · 1 Parent(s): 6009e91

Update README.md
Files changed (1): README.md (+26, -15)
README.md CHANGED
@@ -50,8 +50,9 @@ Because pain is fun, and persistence in design iteration is the only way forward
   --Finalized Logic and Creativity Segments (MK2):

- So after a few key meetings with our top teams of memegineers, we drafted Thorns MK2, which was promptly fast-tracked for production by the Roko's Basilisk Shadow Council.
- Also none of that shit happened, I just redid everything like this:
+ After a few key meetings with our top teams of memegineers, we drafted Thorns MK2, which was promptly fast-tracked for production by the Roko's Basilisk Shadow Council.
+
+ ..Actually I just redid the merge like this:

   -Model Merge Ensemble Key-
@@ -59,32 +60,42 @@ Also none of that shit happened, I just redid everything like this:
   ({({NousHermes+Chronos}[Kimiko])+({Platypus+AiroborosM2.0}[Janine])}{Holodeck[LIMA RP]})

+ ## Findings:
+
+ -Strategically fusing LoRAs to the models that stand to gain the most from them, then merging the result into the ensemble, is exceptionally effective (sketched in code after this diff).
+
+ -Stacking the exact same LoRAs onto one model and then merging that into the ensemble results in noisy garbage.
+
   ## Language Models and LoRAs Used Credits:

- manticore-30b-chat-pyg-alpha [Epoch0.4] by openaccess-ai-collective
- https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha
- SuperCOT-LoRA [30B] by kaiokendev
- https://huggingface.co/kaiokendev/SuperCOT-LoRA
- Storytelling-LLaMa-LoRA [30B, Version 2] by GamerUnTouch
- https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs
- SuperHOT Prototype [30b 8k ctx] by kaiokendev
- https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype
- ChanSung's GPT4-Alpaca-LoRA
- https://huggingface.co/chansung/gpt4-alpaca-lora-30b
- Neko-Institute-of-Science's Vicuna Unlocked LoRA (Checkpoint 46080)
- https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA
- Also thanks to Meta for LLaMA.
+ All models and adapters used are LLaMAv2-13B.
+
+ # Models:
+ Nous-Hermes
+ Chronos
+ Platypus
+ Airoboros
+ Holodeck
+
+ # Adapters:
+ Kimiko
+ Janine
+ LIMA RP
+
+ Also thanks to Meta for LLaMAv2 and for deciding to allow the research community at large to benefit from their incredible work.

   Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
   Thanks to each and every one of you for your incredible work developing some of the best things
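
One plausible reading of the ensemble key, not spelled out in the commit: `{A+B}` is a weight merge of two models and `[X]` is a LoRA adapter fused onto the bracketed result. Below is a minimal sketch of that nesting, assuming plain 50/50 weight averaging at every level and PEFT-style LoRA fusion; the merge ratios, the tooling, and all lowercase repo/adapter IDs are placeholders, not the author's stated method.

```python
# A sketch of the MK2 key: every "{A+B}" becomes a 50/50 weight average and
# every "[X]" a LoRA fused into the bracketed model (both are assumptions).
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

def load(repo_id):
    # Placeholder repo IDs; the actual 13B checkpoints are listed in the credits.
    return AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

def merge(model_a, model_b, alpha=0.5):
    """{A+B}: linear interpolation of two models with identical architectures."""
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    merged = {k: (alpha * sd_a[k].float() + (1.0 - alpha) * sd_b[k].float())
                 .to(sd_a[k].dtype) for k in sd_a}
    model_a.load_state_dict(merged)
    return model_a

def fuse(model, adapter_id):
    """[X]: attach a LoRA adapter, then bake its deltas into the base weights."""
    return PeftModel.from_pretrained(model, adapter_id).merge_and_unload()

# ({NousHermes+Chronos}[Kimiko]) and ({Platypus+Airoboros}[Janine])
left = fuse(merge(load("nous-hermes-13b"), load("chronos-13b")), "kimiko-lora")
right = fuse(merge(load("platypus-13b"), load("airoboros-13b")), "janine-lora")

# {left+right}{Holodeck[LIMA RP]}: merge the pairs, then blend in Holodeck.
ensemble = merge(merge(left, right), fuse(load("holodeck-13b"), "limarp-lora"))
ensemble.save_pretrained("thorns-mk2-13b")
```

Note that under the 50/50 assumption the bracket structure is itself a mixing-ratio choice: Holodeck[LIMA RP] ends up at one half of the final weights, while each of the four inner models contributes one eighth.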
 
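The Findings contrast two ways of applying the same set of adapters. Here is a minimal PEFT sketch of both, again with placeholder model and adapter IDs; `merge_and_unload()` bakes a LoRA's low-rank deltas into the base weights and returns a plain checkpoint.

```python
# Sketch of the two adapter strategies from the Findings; IDs are placeholders
# and the author's exact tooling is not stated in the commit.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

def load_base(repo_id):
    return AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

# Effective: fuse each adapter into the one model that gains the most from it,
# producing an ordinary checkpoint that then enters the ensemble merge.
hermes_kimiko = PeftModel.from_pretrained(load_base("nous-hermes-13b"),
                                          "kimiko-lora").merge_and_unload()
hermes_kimiko.save_pretrained("nous-hermes-kimiko-13b")

# Noisy garbage, per the Findings: stacking the same set of adapters onto a
# single model, one after another, and feeding that composite to the merge.
stacked = PeftModel.from_pretrained(load_base("nous-hermes-13b"),
                                    "kimiko-lora").merge_and_unload()
stacked = PeftModel.from_pretrained(stacked, "janine-lora").merge_and_unload()
stacked = PeftModel.from_pretrained(stacked, "limarp-lora").merge_and_unload()
stacked.save_pretrained("do-not-ship-this-13b")
```

A plausible explanation, not given in the commit: stacking applies several overlapping low-rank updates to the same weights at full strength, compounding their interference, while distributing the adapters lets the subsequent averaging dilute each one's deltas.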