zaq-hack committed
Commit cd3e5c8
1 Parent(s): 329f065

Upload folder using huggingface_hub

Files changed (1): README.md (+41, -0)

README.md (new file)

---
license: llama3
language:
- en
tags:
- moe
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/jgyhmI451GRXri5hEj3lh.png)
(Maybe I'll change the waifu picture later)

An experimental RP-oriented MoE. The idea was to get a model that is equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.

[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)

### ChaoticSoliloquy-4x8B
```yaml
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
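This is a mergekit MoE config: `jeiku_Chaos_RP_l3_8B` supplies the base weights, the four listed models become the experts, two experts are active per token, and `gate_mode: random` leaves the router weights randomly initialized. As a rough sketch of how such a config is typically applied (not part of the original card; assumes `mergekit` is installed and the file names below are placeholders):

```python
# Sketch only: run mergekit's MoE merge on the config above.
# Assumptions: `pip install mergekit` provides the `mergekit-moe` command,
# and the YAML is saved as chaoticsoliloquy-4x8b.yml (placeholder names).
import subprocess

subprocess.run(
    ["mergekit-moe", "chaoticsoliloquy-4x8b.yml", "./ChaoticSoliloquy-4x8B"],
    check=True,  # raise if the merge exits with an error
)
```

The resulting folder loads like any other `transformers` causal LM and can then be quantized to the GGUF/Exl2 formats linked above.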

## Models used

- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)

## Vision

[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png)

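The linked mmproj is a LLaVA-style projector for Llama 3, intended for llama.cpp-compatible runtimes: load a GGUF quant of this model together with the projector to get image input. A minimal, hedged sketch with `llama-cpp-python` (file names, paths, and the choice of chat handler are assumptions, not from this card):

```python
# Sketch only: image input via llama-cpp-python, pairing a GGUF quant of the
# model with the Llama 3 mmproj file. All file names are placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="llama3_mmproj.gguf")  # projector
llm = Llama(
    model_path="ChaoticSoliloquy-4x8B-Q4_K_M.gguf",  # placeholder quant name
    chat_handler=chat_handler,
    n_ctx=4096,  # leave room for the image embedding tokens
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```
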
## Prompt format: Llama 3
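The model follows the stock Llama 3 Instruct template (`<|start_header_id|>role<|end_header_id|>` headers terminated by `<|eot_id|>`). A small sketch with `transformers` (the repo id is an assumption, and it presumes the bundled tokenizer ships the Llama 3 chat template; otherwise format the prompt manually with the tokens shown in the comment):

```python
# Sketch only: build a Llama 3 formatted prompt and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "xxx777xxxASD/ChaoticSoliloquy-4x8B"  # assumed repo id, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are {{char}}, role-playing with {{user}}."},
    {"role": "user", "content": "Hello! Who are you?"},
]
# apply_chat_template renders the Llama 3 format, e.g.
# <|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n...<|eot_id|>...
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```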