# merge2

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged using the SCE merge method, with /kaggle/input/meta-llama-3-8b/transformers/hf/1 as the base.
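SCE (Select, Calculate, Erase) builds a task vector (delta from the base) for each source model, keeps only the high-variance parameter positions, weights each model's surviving delta, and drops sign-conflicting elements before summing. The following is a minimal NumPy sketch of that procedure for a single weight matrix; the function name and the exact coefficient formula are illustrative assumptions, and mergekit's actual implementation differs in detail:

```python
import numpy as np

def sce_merge(base, models, select_topk=0.65):
    """Illustrative sketch of SCE (Select, Calculate, Erase) for one weight matrix."""
    # Task vectors: each source model's delta from the base model.
    deltas = np.stack([m - base for m in models])
    # Select: keep the top-k fraction of positions with the highest
    # variance across source models; zero out the rest.
    var = deltas.var(axis=0)
    k = int(np.ceil(select_topk * var.size))
    thresh = np.sort(var, axis=None)[-k]
    deltas = deltas * (var >= thresh)
    # Calculate: a per-model coefficient proportional to the squared
    # norm of its surviving delta (illustrative weighting).
    sq = (deltas ** 2).sum(axis=tuple(range(1, deltas.ndim)))
    coef = sq / sq.sum() if sq.sum() > 0 else np.full(len(models), 1.0 / len(models))
    # Erase: drop elements whose sign disagrees with the elected
    # (summed) sign across models.
    elected = np.sign(deltas.sum(axis=0))
    deltas = deltas * (np.sign(deltas) == elected)
    # Weighted sum of the surviving deltas, added back onto the base.
    merged_delta = np.einsum('m,m...->...', coef, deltas)
    return base + merged_delta
```

With `select_topk=0.65`, as in the configuration below, only 65% of parameter positions survive the Select step.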

### Models Merged

The following models were included in the merge:

* /kaggle/input/llama-3-youko-8b/transformers/hf/1
* /kaggle/input/llama-3-swallow-8b-v0.1/transformers/hf/1
* /kaggle/input/meta-llama-3-8b-instruct/transformers/hf/1

### Configuration

The following YAML configuration was used to produce this model:


```yaml
models:
  # Pivot model
  - model: /kaggle/input/meta-llama-3-8b/transformers/hf/1
  # Target models
  - model: /kaggle/input/meta-llama-3-8b-instruct/transformers/hf/1
  - model: /kaggle/input/llama-3-youko-8b/transformers/hf/1
  - model: /kaggle/input/llama-3-swallow-8b-v0.1/transformers/hf/1
merge_method: sce
base_model: /kaggle/input/meta-llama-3-8b/transformers/hf/1
parameters:
  select_topk: 0.65
dtype: bfloat16
```
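The `select_topk: 0.65` parameter governs the Select step: only the 65% of parameter positions whose task-vector variance across the source models is highest contribute to the merge. A toy illustration with made-up variance values:

```python
import numpy as np

# Hypothetical per-position variances of the task vectors across models.
var = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.3, 0.2, 0.02, 0.6, 0.4])

select_topk = 0.65
k = int(np.ceil(select_topk * var.size))  # 7 of 10 positions survive
keep = var >= np.sort(var)[-k]            # boolean mask of retained positions
```

Positions outside the mask are zeroed, so low-variance (largely redundant) parameter changes never enter the weighted sum.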

Model tree for Casual-Autopsy/Llama-3-Yollow-SCE