# Erudite-V1-Leashed-LLaMA-70B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with TareksLab/Polyglot-V2-LLaMa-70B as the base.
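
Conceptually, Model Stock averages the fine-tuned weights and then interpolates that average back toward the base model, with an interpolation weight derived from the angle between the fine-tuned task vectors. A minimal per-tensor sketch of the rule from the paper follows; this is an illustration only, not mergekit's implementation, and `model_stock_merge` is a hypothetical helper name:

```python
# Illustrative sketch of the Model Stock merge rule (Jang et al., 2024).
# Operates on a single weight tensor; mergekit applies this per-tensor
# across the whole model.
import torch

def model_stock_merge(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge fine-tuned tensors toward their average, anchored at the base."""
    k = len(tuned)
    deltas = [(w - base).flatten() for w in tuned]
    # Average pairwise cosine similarity between task vectors (w_i - w_0).
    cos_vals = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean()
    # Interpolation weight from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(tuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

Because t shrinks as the fine-tuned models disagree (cos θ near zero), the merged weights stay close to the base when the donor models conflict.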

### Models Merged

The following models were included in the merge:

* nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
* NousResearch/Hermes-3-Llama-3.1-70B
* pankajmathur/orca_mini_v8_1_70b
* allenai/Llama-3.1-Tulu-3-70B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: TareksLab/Polyglot-V2-LLaMa-70B
merge_method: model_stock
dtype: bfloat16
models:
  - model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
  - model: NousResearch/Hermes-3-Llama-3.1-70B
  - model: pankajmathur/orca_mini_v8_1_70b
  - model: allenai/Llama-3.1-Tulu-3-70B
```
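
The merge can be rerun with mergekit's `mergekit-yaml` CLI (roughly `mergekit-yaml config.yaml ./merged`). To use the resulting checkpoint, a standard transformers loading sketch follows; the model ID comes from this repository, while the prompt and generation settings are illustrative placeholders:

```python
# Load the merged model with transformers. A 70B bfloat16 checkpoint
# needs substantial GPU memory or offloading (device_map="auto" requires
# the accelerate package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TareksLab/Erudite-V1-Leashed-LLaMA-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```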