# Llama3-12B-Chinese-lingyang

This is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged with the passthrough merge method, using [hfl/llama-3-chinese-8b-instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2) as the base. Passthrough simply concatenates the listed layer slices in order, so the merged model ends up deeper than the 32-layer 8B base.
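As a quick sanity check on the depth, the slice ranges from the configuration below add up as follows (a standalone sketch; the tuples mirror the `layer_range` entries):

```python
# Layer-count sanity check for the passthrough merge.
# Each tuple mirrors a layer_range entry from the YAML configuration below.
slices = [(0, 10), (7, 17), (13, 23), (18, 28), (22, 32)]

# Passthrough concatenates the slices, so overlapping layers are duplicated.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 50 layers, versus 32 in the 8B base -> roughly 12B parameters
```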
### Models Merged

This is a self-merge: every slice comes from the base model, [hfl/llama-3-chinese-8b-instruct-v2](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2), so no other models were included in the merge.
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: "hfl/llama-3-chinese-8b-instruct-v2"
        layer_range: [0, 10]
  - sources:
      - model: "hfl/llama-3-chinese-8b-instruct-v2"
        layer_range: [7, 17]
  - sources:
      - model: "hfl/llama-3-chinese-8b-instruct-v2"
        layer_range: [13, 23]
  - sources:
      - model: "hfl/llama-3-chinese-8b-instruct-v2"
        layer_range: [18, 28]
  - sources:
      - model: "hfl/llama-3-chinese-8b-instruct-v2"
        layer_range: [22, 32]
merge_method: passthrough
base_model: "hfl/llama-3-chinese-8b-instruct-v2"
dtype: bfloat16
```
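To reproduce the merge, save this configuration as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged`. The result loads like any other Llama-3 checkpoint; below is a minimal sketch using `transformers` (the repo id is the one this card is published under, and the Chinese prompt is just an illustrative example):

```python
# Minimal sketch: loading the merged model with Hugging Face transformers.
# Assumes the published repo id from this card; swap in a local path if you
# ran the merge yourself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wwe180/Llama3-12B-Chinese-lingyang"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

# Illustrative prompt: "Hello, please introduce yourself."
messages = [{"role": "user", "content": "你好，请介绍一下你自己。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```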