DavidAU committed on
Commit 8a5ab6d · verified · 1 Parent(s): 7e1036f

Delete README.md

Files changed (1)
  1. README.md +0 -44
README.md DELETED
@@ -1,44 +0,0 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # L3.1-RP-Hero-8B-3
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using G:/7B/Llama-3.1-8B-DarkIdol-Instruct-1.2-Uncensored as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * G:/7B/L3-Umbral-Mind-RP-v0.3-8B
- * G:/7B/Llama-3.1-8B-ArliAI-RPMax-v1.1
- * G:/7B/L3-Pantheon-RP-1.0-8b
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: G:/7B/L3-Pantheon-RP-1.0-8b
-     parameters:
-       weight: [1, 1, .75, .5, .25, .25, .05, .01]
-   - model: G:/7B/L3-Umbral-Mind-RP-v0.3-8B
-     parameters:
-       weight: [0, 0, .25, .35, .4, .25, .30, .04]
-   - model: G:/7B/Llama-3.1-8B-ArliAI-RPMax-v1.1
-     parameters:
-       weight: [0, 0, 0, .15, .35, .5, .65, .95]
- merge_method: dare_ties
- base_model: G:/7B/Llama-3.1-8B-DarkIdol-Instruct-1.2-Uncensored
- dtype: bfloat16
-
- ```
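The `weight` lists in the deleted config give each model a short list of anchor values that a merge tool spreads across all of the network's layers, so (for example) L3-Pantheon-RP dominates early layers while ArliAI-RPMax dominates late ones. As a hedged illustration (not mergekit's actual implementation; the function name `expand_weights` and the assumption of simple linear interpolation are mine), such a per-layer expansion might look like:

```python
def expand_weights(anchors, num_layers):
    """Linearly interpolate a short anchor list to one weight per layer.

    `anchors` is a list like [1, 1, .75, .5, .25, .25, .05, .01];
    the result has `num_layers` values, starting at anchors[0] and
    ending at anchors[-1]. This is an illustrative sketch, not the
    exact algorithm mergekit uses internally.
    """
    if num_layers == 1:
        return [float(anchors[0])]
    out = []
    for i in range(num_layers):
        # Position of this layer along the anchor list, in [0, len-1].
        pos = i * (len(anchors) - 1) / (num_layers - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out
```

For a 32-layer Llama-3.1-8B-style model, `expand_weights([1, 1, .75, .5, .25, .25, .05, .01], 32)` would start at 1.0 and taper to 0.01, mirroring how the config hands influence from one donor model to the next across depth.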