For a change, I decided to make a simple merge, using some models I was curious to see work together.
```yaml
models:
  - model: ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.20
  - model: TheDrummer/Anubis-70B-v1.1
    parameters:
      weight: 0.17
      density: 0.7
      epsilon: 0.20
  - model: Mawdistical/Vulpine-Seduction-70B
    parameters:
      weight: 0.16
      density: 0.7
      epsilon: 0.20
  - model: Darkhn/L3.3-70B-Animus-V5-Pro
    parameters:
      weight: 0.17
      density: 0.7
      epsilon: 0.20
  - model: zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
    parameters:
      weight: 0.17
      density: 0.7
      epsilon: 0.20
  - model: Sao10K/Llama-3.3-70B-Vulpecula-r1
    parameters:
      weight: 0.17
      density: 0.7
      epsilon: 0.20
merge_method: della_linear
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  lambda: 1.1
  normalize: false
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: Sao10K/Llama-3.3-70B-Vulpecula-r1
  pad_to_multiple_of: 8
```
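
For reference, here's a minimal sketch of loading the result with `transformers`. It assumes the merged weights are published as-is at `Tarek07/Nomad-LLaMa-70B` and that you have hardware for a 70B model in bfloat16; the prompt text is just an illustration.

```python
# Minimal sketch: load the merged model with Hugging Face transformers.
# Assumes the merged weights live at Tarek07/Nomad-LLaMa-70B and that
# enough GPU memory (or offloading) is available for a 70B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Tarek07/Nomad-LLaMa-70B"

# The tokenizer is sourced from Sao10K/Llama-3.3-70B-Vulpecula-r1 per the
# config, with the llama3 chat template applied.
tokenizer = AutoTokenizer.from_pretrained(repo_id)

# out_dtype is bfloat16, so load in bf16 to match the published weights.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```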