# llama-3.1-120b-instruct

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit). It is a recreation of mlabonne/Meta-Llama-3-120B-Instruct with the same self-merge configuration, but built from Llama 3.1 70B Instruct instead of Llama 3 70B Instruct.

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
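Passthrough stacks slices of the source model's layers back-to-back without blending any weights; overlapping slice ranges duplicate layers, which is how an 80-layer 70B model grows to a 140-layer, roughly 122B-parameter model. A minimal sketch of the layer arithmetic (the helper below is illustrative only, not part of mergekit):

```python
# Illustrative sketch: how overlapping passthrough slices expand the layer stack.
# The ranges mirror the YAML config below; half-open intervals, as in mergekit.

SLICES = [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 80)]

def stacked_layers(slices: list[tuple[int, int]]) -> list[int]:
    """Return the source-layer index feeding each layer of the merged model."""
    return [layer for start, end in slices for layer in range(start, end)]

layers = stacked_layers(SLICES)
print(len(layers))   # 140 layers, up from the source model's 80
print(layers[:25])   # layers 10-14 reappear at positions 20-24: the 10-layer
                     # overlap between the first and second slices
```

Each 20-layer slice overlaps its neighbor by 10 layers, so every interior layer of the source model is copied twice.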

### Models Merged

The following models were included in the merge:

* meta-llama/Llama-3.1-70B-Instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - layer_range: [0, 20]
    model: meta-llama/Llama-3.1-70B-Instruct
- sources:
  - layer_range: [10, 30]
    model: meta-llama/Llama-3.1-70B-Instruct
- sources:
  - layer_range: [20, 40]
    model: meta-llama/Llama-3.1-70B-Instruct
- sources:
  - layer_range: [30, 50]
    model: meta-llama/Llama-3.1-70B-Instruct
- sources:
  - layer_range: [40, 60]
    model: meta-llama/Llama-3.1-70B-Instruct
- sources:
  - layer_range: [50, 70]
    model: meta-llama/Llama-3.1-70B-Instruct
- sources:
  - layer_range: [60, 80]
    model: meta-llama/Llama-3.1-70B-Instruct
merge_method: passthrough
dtype: float16
```
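
A merge like this can be reproduced either with the `mergekit-yaml` CLI or programmatically. Below is a minimal sketch using mergekit's documented Python entry point, `run_merge`; the file paths are placeholders, and the option values shown are assumptions for a straightforward local run:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "./llama-3.1-120b-instruct.yml"  # the YAML config above, saved to disk
OUTPUT_PATH = "./llama-3.1-120b-instruct"     # placeholder output directory

# Parse the YAML into mergekit's validated configuration object.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the passthrough merge; weights are copied slice by slice, never blended.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for tensor I/O if present
        copy_tokenizer=True,             # carry the source tokenizer over
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent CLI invocation would be `mergekit-yaml llama-3.1-120b-instruct.yml ./llama-3.1-120b-instruct --cuda`.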