llm-7B-slerp

llm-7B-slerp is a custom merge of:

- BioMistral/BioMistral-7B
- HuggingFaceH4/zephyr-7b-beta

using mergekit and the SLERP (Spherical Linear Interpolation) merge method.

This merge was designed to combine the biomedical reasoning strength of BioMistral/BioMistral-7B with the instruction-following capabilities of HuggingFaceH4/zephyr-7b-beta, the second model in the configuration below.
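SLERP interpolates between two weight tensors along the arc connecting them rather than along a straight line, which tends to preserve the scale and geometry of each model's weights better than plain averaging. Below is a minimal conceptual sketch in PyTorch of what SLERP does to a single pair of tensors; it is an illustration of the technique, not mergekit's exact implementation:

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (conceptual sketch)."""
    # Flatten and normalize copies to measure the angle between the two tensors.
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    v0_n = v0_flat / (v0_flat.norm() + eps)
    v1_n = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(torch.dot(v0_n, v1_n), -1.0, 1.0)
    theta = torch.acos(dot)
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if theta.abs() < 1e-4:
        return (1 - t) * v0 + t * v1
    sin_theta = torch.sin(theta)
    w0 = torch.sin((1 - t) * theta) / sin_theta
    w1 = torch.sin(t * theta) / sin_theta
    # Interpolate the original (unnormalized) tensors along the arc.
    return (w0 * v0 + w1 * v1).to(v0.dtype)
```

At t = 0 the result is exactly the first tensor, at t = 1 exactly the second, and intermediate values move along the arc between them.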

🧩 Merge Configuration


```yaml
slices:
  - sources:
      - model: BioMistral/BioMistral-7B
        layer_range: [0, 8]
      - model: HuggingFaceH4/zephyr-7b-beta
        layer_range: [0, 8]
merge_method: slerp
base_model: BioMistral/BioMistral-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
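Here t controls how far each merged tensor moves from the base model (t = 0 keeps BioMistral/BioMistral-7B, t = 1 takes HuggingFaceH4/zephyr-7b-beta), and the five-element value lists are interpolated across the layer range: self-attention weights lean toward zephyr in later layers, MLP weights lean toward BioMistral, and all remaining tensors use t = 0.5. To reproduce the merge, save the YAML above as config.yml and run it through mergekit. The sketch below uses mergekit's Python API under the assumption that mergekit is installed (pip install mergekit); the mergekit-yaml CLI works equally well, and the output path is illustrative:

```python
# Sketch of reproducing the merge with mergekit's Python API.
# Assumes the YAML configuration above is saved as config.yml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./llm-7B-slerp",  # output directory (illustrative)
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```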
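💻 Usage

The merged model can be loaded like any other Mistral-style checkpoint with transformers. This is a standard loading sketch; the repo id your-username/llm-7B-slerp is a placeholder for wherever the merged weights are hosted:

```python
# Minimal inference sketch with transformers.
# The repo id below is a placeholder; replace it with the actual
# location of the merged weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llm-7B-slerp"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "What are common symptoms of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```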