Mistralified: a barycentric embedding swap applied via token surgery plus a config change, using Captain_BMO as the donor model ~ [no additional training]
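The card does not include the swap script, so the following is only a rough illustration of what a "token surgery" embedding swap usually looks like: donor embedding rows are copied directly for tokens both vocabularies share, with a uniform convex (barycentric) combination of donor sub-piece embeddings as a fallback for the rest. A minimal sketch under that assumption; the model pairing, uniform weights, and output path are illustrative, not the author's actual procedure:

```python
# Hypothetical "token surgery" embedding swap sketch, NOT the card
# author's actual script. Copies donor embedding rows for shared tokens;
# for tokens the donor lacks, averages the donor embeddings of the
# token's sub-pieces (a uniform barycentric combination).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DONOR = "Nitral-AI/Captain_BMO-12B"           # donor per the card
RECIPIENT = "Epiculous/Violet_Twilight-v0.2"  # assumed recipient

donor = AutoModelForCausalLM.from_pretrained(DONOR, torch_dtype=torch.bfloat16)
target = AutoModelForCausalLM.from_pretrained(RECIPIENT, torch_dtype=torch.bfloat16)
donor_tok = AutoTokenizer.from_pretrained(DONOR)
target_tok = AutoTokenizer.from_pretrained(RECIPIENT)

donor_emb = donor.get_input_embeddings().weight
target_emb = target.get_input_embeddings().weight
donor_vocab = donor_tok.get_vocab()

with torch.no_grad():
    for token, idx in target_tok.get_vocab().items():
        if token in donor_vocab:
            # Direct swap for tokens present in both vocabularies.
            target_emb[idx] = donor_emb[donor_vocab[token]]
        else:
            # Barycentric fallback: re-encode the token's surface form with
            # the donor tokenizer and average the resulting piece embeddings.
            text = target_tok.convert_tokens_to_string([token])
            pieces = donor_tok.encode(text, add_special_tokens=False)
            if pieces:
                target_emb[idx] = donor_emb[pieces].mean(dim=0)

target.save_pretrained("violet-twilight-mistralified")  # hypothetical output path
```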
Original models used in the merge:
- Epiculous/Violet_Twilight-v0.2
- Nitral-AI/Captain_BMO-12B
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Nitral-AI/Captain_BMO-12B
        layer_range: [0, 40]
      - model: Epiculous/Violet_Twilight-v0.2
        layer_range: [0, 40]
merge_method: slerp
base_model: Nitral-AI/Captain_BMO-12B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.420
dtype: bfloat16
```
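For context: `merge_method: slerp` spherically interpolates each matching pair of weight tensors rather than blending them linearly, and mergekit spreads a list-valued `t` across layer depth. With `t = 0` keeping the base model (Captain_BMO) and `t = 1` taking the other model, `self_attn` tensors shift toward Violet_Twilight going up the stack while `mlp` tensors do the reverse, and the bare `value: 0.420` covers every tensor neither filter matches. A minimal NumPy sketch of the per-tensor operation (the function and its fallback are illustrative, not mergekit's implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns v0 (the base model's tensor); t = 1 returns v1.
    Falls back to plain linear interpolation when the tensors are
    near-parallel and the spherical formula becomes unstable.
    """
    v0_flat, v1_flat = v0.ravel(), v1.ravel()
    v0_unit = v0_flat / (np.linalg.norm(v0_flat) + eps)
    v1_unit = v1_flat / (np.linalg.norm(v1_flat) + eps)
    dot = float(np.clip(v0_unit @ v1_unit, -1.0, 1.0))
    omega = np.arccos(dot)            # angle between the two tensors
    if np.sin(omega) < eps:           # nearly colinear: lerp instead
        return ((1 - t) * v0_flat + t * v1_flat).reshape(v0.shape)
    s0 = np.sin((1 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * v0_flat + s1 * v1_flat).reshape(v0.shape)
```

With the config saved as `config.yaml`, mergekit's `mergekit-yaml config.yaml ./output-model-directory` entry point runs a slice merge of this kind.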