Masterjp123 committed
Commit 4228a2c
1 Parent(s): 48297fe

Create README.md

---
base_model:
- Sao10K/L3-Solana-8B-v1
- Masterjp123/Llama-3-SnowyRP-8B-V1
- kuotient/Meta-Llama-3-8B-Instruct
- cgato/L3-TheSpice-8b-v0.1.3
library_name: transformers
tags:
- mergekit
- merge

---
# merged
This is the unquantized second version of SnowyRP, built on Llama 3. More models were added to the merge, and an EXL2 quant is linked below.

[EXL2](https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1-B-4.5bit)

## Merge Details
For this model I used the most uncensored and capable finetunes of Llama 3. Since Llama 3 was trained on the largest amount of tokens of any model I know of, I expect it to have more than enough of a knowledge base.

This model is under both the Llama 3 license and a few other licenses that only affect commercial use. I am not a lawyer, so use it in commercial settings at your own risk.

## Model Use:

Here is a modified Alpaca prompt I use with this model:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: Take the role of {{char}} in a play that leaves a lasting impression on {{user}}. Write {{char}}'s next reply. Never skip or gloss over {{char}}'s actions. Progress the scene at a naturally slow pace. Avoid repetition, don't loop. Develop the plot slowly, always stay in character. Describe all actions in full, elaborate, explicit, graphic, and vivid detail. Mention all relevant sensory perceptions.
```
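
The `{{char}}` and `{{user}}` placeholders follow SillyTavern-style macros and can be filled in programmatically before the prompt is sent to the model. A minimal sketch (the `fill_prompt` helper and the example names are my own, not part of the card):

```python
# Fill SillyTavern-style {{char}} / {{user}} macros in a prompt template.
# The helper name and example names are illustrative.

def fill_prompt(template: str, char: str, user: str) -> str:
    """Substitute the {{char}} and {{user}} macros in the template."""
    return template.replace("{{char}}", char).replace("{{user}}", user)

template = (
    "### Instruction: Take the role of {{char}} in a play that leaves "
    "a lasting impression on {{user}}. Write {{char}}'s next reply."
)
prompt = fill_prompt(template, char="Luna", user="Alex")
print(prompt)  # both macros replaced with the chosen names
```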

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [kuotient/Meta-Llama-3-8B-Instruct](https://huggingface.co/kuotient/Meta-Llama-3-8B-Instruct) as a base.
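
In rough terms, TIES builds a task vector (finetune minus base) for each model, trims each to its largest-magnitude entries, elects a per-parameter sign by majority, and averages only the values that agree with that sign. A simplified sketch on toy 1-D tensors (real merges operate per-parameter on full checkpoints; function and variable names here are my own):

```python
# Simplified sketch of TIES merging (trim / elect sign / disjoint mean)
# on toy 1-D numpy arrays. Not mergekit's actual implementation.
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Merge a list of finetuned tensors into `base` TIES-style."""
    # 1. Task vectors: what each finetune changed relative to the base.
    deltas = [ft - base for ft in finetuned]
    # 2. Trim: keep only the top-`density` fraction of each delta by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    # 3. Elect a per-parameter sign by total signed mass across models.
    elected = np.sign(sum(trimmed))
    # 4. Disjoint mean: average only the values agreeing with the elected sign.
    agree = [np.where(np.sign(t) == elected, t, 0.0) for t in trimmed]
    counts = sum((np.sign(t) == elected) & (t != 0) for t in trimmed)
    merged_delta = sum(agree) / np.maximum(counts, 1)
    return base + merged_delta
```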

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
* [Masterjp123/Llama-3-SnowyRP-8B-V1](https://huggingface.co/Masterjp123/Llama-3-SnowyRP-8B-V1)
* [cgato/L3-TheSpice-8b-v0.1.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.1.3)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: kuotient/Meta-Llama-3-8B-Instruct
dtype: float16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 32]
    model: Masterjp123/Llama-3-SnowyRP-8B-V1
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 32]
    model: cgato/L3-TheSpice-8b-v0.1.3
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 32]
    model: Sao10K/L3-Solana-8B-v1
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 32]
    model: kuotient/Meta-Llama-3-8B-Instruct
```
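
To my understanding of mergekit's behavior, a list value such as `weight: [0.0, 0.3, 0.7, 1.0]` is a gradient: the anchor values are interpolated linearly across the layer range, so early layers lean on one model and later layers on another. A sketch of that interpolation (the `expand_gradient` helper is my own, not a mergekit API):

```python
# Sketch of how a mergekit-style gradient list might expand to per-layer
# values, assuming linear interpolation of the anchors across the layers.
import numpy as np

def expand_gradient(anchors, num_layers=32):
    """Linearly interpolate anchor values over `num_layers` layers."""
    xs = np.linspace(0.0, 1.0, num=len(anchors))       # anchor positions
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)  # each layer's position
    return np.interp(layer_pos, xs, anchors)

weights = expand_gradient([0.0, 0.3, 0.7, 1.0], num_layers=32)
# The first layer gets weight ~0.0, the last gets 1.0, middle layers in between.
```

The config itself can be run with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./merged`), assuming mergekit is installed.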