Nexesenex_Llama_3.x_70b_Electra-Legion_fusion_v2 - EXL2
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Arcee Fusion merge method, with Steelskull/L3.3-Electra-R1-70b as the base.

The following models were included in the merge:

- Tarek07/Legion-V2.1-LLaMa-70B
The following YAML configuration was used to produce this model:
```yaml
merge_method: arcee_fusion
models:
  - model: Steelskull/L3.3-Electra-R1-70b
    parameters:
      weight: 1.0
  - model: Tarek07/Legion-V2.1-LLaMa-70B
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: float32
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
chat_template: auto
tokenizer:
  source: union
```
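
To reproduce the merge, mergekit provides both a CLI (`mergekit-yaml config.yaml ./output-model-directory`) and a Python entry point. Below is a minimal sketch of the Python route, assuming the YAML above is saved as `config.yaml`; the output path and `MergeOptions` flags are illustrative, not the settings used to build this model:

```python
# Minimal sketch: run the merge config above through mergekit's Python API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config into mergekit's configuration model.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",  # placeholder output directory
    options=MergeOptions(
        cuda=True,             # merge on GPU if one is available
        copy_tokenizer=True,   # write the (union) tokenizer into the output
    ),
)
```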
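
The EXL2 quants in this collection load with the exllamav2 library. A minimal generation sketch, assuming the quant has already been downloaded to a local directory (the model path is a placeholder, and a 70B quant still needs substantial VRAM):

```python
# Minimal sketch: load an EXL2 quant and generate with exllamav2.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "./Nexesenex_Llama_3.x_70b_Electra-Legion_fusion_v2-exl2"  # placeholder path

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)

# Lazy cache + autosplit spreads the weights across available GPUs.
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

output = generator.generate(prompt="Once upon a time,", max_new_tokens=200)
print(output)
```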