~ We are Legion...
My biggest merge yet, built from a total of 15 specially curated models. My approach was to first create five highly specialized intermediate models:
- A very coherent but completely uncensored base
- A very intelligent model based on UGI, Willingness and NatInt scores on the UGI Leaderboard
- A highly descriptive writing model, specializing in creative and natural prose
- A RP model specially merged with fine-tuned models that use a lot of RP datasets
- The secret ingredient: A completely unhinged, uncensored final model
Each of these five models went through a series of iterations until I had something that worked well, and I then combined them to make LEGION.
The full list of models used in this merge is below:
- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- Sao10K/L3-70B-Euryale-v2.1
- SicariusSicariiStuff/Negative_LLAMA_70B
- allura-org/Bigger-Body-70b
- Sao10K/70B-L3.3-mhnnn-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- Sao10K/L3.1-70B-Hanami-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- NeverSleep/Lumimaid-v0.2-70B
- ReadyArt/Forgotten-Safeword-70B-3.6
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with TareksLab/M-BASE-SCE as the base.
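To make the method concrete, here is a minimal numpy sketch of the DARE TIES idea (an illustration only, not mergekit's implementation): each model contributes a delta from the shared base; DARE randomly drops a fraction `1 - density` of each delta's entries and rescales survivors by `1/density`, and TIES then elects a per-parameter sign and sums only the entries that agree with it.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare(delta, density, rng):
    # Drop-And-REscale: keep each entry with probability `density`,
    # rescale survivors by 1/density so the expected value is unchanged.
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties(base, models, weights, density, rng):
    # Task vectors: each model's weighted, sparsified delta from the base.
    weighted = [w * dare(m - base, density, rng) for w, m in zip(weights, models)]
    # TIES sign election: majority sign per parameter by total weighted mass.
    total = np.sum(weighted, axis=0)
    sign = np.sign(total)
    # Keep only the entries agreeing with the elected sign, then sum.
    merged_delta = np.zeros_like(base)
    for d in weighted:
        merged_delta += np.where(np.sign(d) == sign, d, 0.0)
    return base + merged_delta

# Toy example: five "models" as random perturbations of a zero base,
# mirroring this merge's five components at weight 0.20 / density 0.5.
base = np.zeros(8)
models = [base + rng.normal(size=8) for _ in range(5)]
merged = dare_ties(base, models, [0.20] * 5, density=0.5, rng=rng)
```

With `density=1.0` and a single model at weight 1.0, the sketch degenerates to returning that model unchanged, which is a useful sanity check on the arithmetic.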
### Models Merged

The following intermediate merges were included in the final merge:

- TareksLab/M-MERGE1
- TareksLab/M-MERGE2
- TareksLab/M-MERGE3
- TareksLab/M-MERGE4
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TareksLab/M-MERGE4
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/M-MERGE2
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/M-MERGE3
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/M-MERGE1
    parameters:
      weight: 0.20
      density: 0.5
  - model: TareksLab/M-BASE-SCE
    parameters:
      weight: 0.20
      density: 0.5
merge_method: dare_ties
base_model: TareksLab/M-BASE-SCE
parameters:
  normalize: false
out_dtype: bfloat16
tokenizer:
  source: base
```
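One detail worth noting about this config: the five weights of 0.20 already sum to 1.0, so `normalize: false` produces the same result that weight normalization would. A standalone sketch of that arithmetic (not mergekit code):

```python
# The config assigns each of the five components weight 0.20.
weights = [0.20] * 5
total = sum(weights)
# With normalize: false, weighted deltas are summed as-is; with
# normalize: true, each weight would first be divided by the total.
# Since the total is 1.0, both settings give identical weights here.
normalized = [w / total for w in weights]
```

Disabling normalization only changes the outcome when the weights do not sum to 1, in which case it deliberately scales the merged deltas up or down.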
Model: TareksTesting/Legion-V1.2-LLaMa-70B