FrenzyBiscuit committed
Commit 749b3e0 · verified · 1 Parent(s): 8ea1acb

Update README.md

Files changed (1):
  1. README.md +6 -37
README.md CHANGED
@@ -1,42 +1,11 @@
 ---
-base_model:
-- TareksLab/L2-STEP3a
-- TareksLab/L2-MERGE2a
+base_model:
+- TareksTesting/Legion-V2.3-LLaMa-70B
 library_name: transformers
-tags:
-- mergekit
-- merge
-
+base_model_relation: quantized
+license: apache-2.0
 ---
-# merge
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
-### Merge Method
-
-This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method using [TareksLab/L2-STEP3a](https://huggingface.co/TareksLab/L2-STEP3a) as a base.
-
-### Models Merged
-
-The following models were included in the merge:
-* [TareksLab/L2-MERGE2a](https://huggingface.co/TareksLab/L2-MERGE2a)
-
-### Configuration
-
-The following YAML configuration was used to produce this model:
-
-```yaml
-models:
-  - model: TareksLab/L2-MERGE2a
-  - model: TareksLab/L2-STEP3a
-merge_method: nearswap
-base_model: TareksLab/L2-STEP3a
-parameters:
-  t:
-    - value: 0.0001
-dtype: float32
-out_dtype: bfloat16
-tokenizer:
-  source: base
-```
+
+EXL2 Quants by FrenzyBiscuit.
+
+This model is 4.0 BPW EXL2.
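For context, a merge-then-quantize pipeline like the one this commit documents could be reproduced roughly as follows. This is a sketch, not the commands actually used for this repository: the config filename, output directories, and the exllamav2 checkout path are placeholders, and it assumes mergekit (`pip install mergekit`) plus a local clone of the exllamav2 repo.

```shell
# 1) Produce the merged model from a mergekit YAML config
#    (e.g. the nearswap config shown in the removed README section).
mergekit-yaml nearswap-config.yaml ./merged-model

# 2) Quantize the merged model to 4.0 bits per weight with
#    exllamav2's converter; -b sets the target BPW.
python exllamav2/convert.py \
    -i ./merged-model \
    -o ./quant-work \
    -cf ./merged-model-4.0bpw-exl2 \
    -b 4.0
```

The resulting `./merged-model-4.0bpw-exl2` directory is what a repo like this one would host.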