# Calliope-7b

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
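Once published, the merged weights load like any other Mistral-architecture causal LM. Below is a minimal sketch using `transformers`; the repository id is a placeholder, since this card does not state the namespace the model is published under.

```python
# A minimal usage sketch with Hugging Face transformers. The repository id is a
# placeholder; substitute the actual one this model is published under.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/Calliope-7b"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config below
    device_map="auto",
)

prompt = "Write a short poem about the sea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```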
## Merge Details

### Merge Method

This model was merged using the `rescaled_sample` merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.
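This card does not describe what `rescaled_sample` does internally. The per-model `weight` and `density` parameters in the configuration below suggest a DARE-style stochastic merge, where each model's delta from the base is randomly sparsified and rescaled. The sketch below illustrates that general idea; it is an assumption about the method, not a transcription of mergekit's implementation.

```python
# Illustrative sketch of a DARE-style "sample and rescale" merge step. This is
# an assumption about what rescaled_sample does; mergekit's source is the
# authoritative reference.
import torch

def sample_and_rescale_merge(base, deltas, weights, densities):
    """base: a base-model tensor; deltas: per-model (finetuned - base) tensors."""
    merged = base.clone()
    for delta, weight, density in zip(deltas, weights, densities):
        # Keep each delta entry with probability `density`, zeroing the rest.
        mask = (torch.rand(delta.shape, device=delta.device) < density).to(delta.dtype)
        # Divide by `density` so the sparsified delta keeps its expected value,
        # then add it to the base with the model's mixing weight.
        merged += weight * (delta * mask) / density
    return merged
```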
### Models Merged

The following models were included in the merge:

- [Eric111/openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B](https://huggingface.co/Eric111/openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B)
- [Nexusflow/Starling-LM-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)
- [paulml/OmniBeagleSquaredMBX-v3-7B](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B)
- [chargoddard/servile-harpsichord-cdpo](https://huggingface.co/chargoddard/servile-harpsichord-cdpo)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Nexusflow/Starling-LM-7B-beta
    parameters:
      weight: 0.5
      density: 0.5
  - model: Eric111/openchat-3.5-0106-128k-DPO_dpo-binarized-NeuralTrix-7B
    parameters:
      weight: 0.3
      density: 0.3
  - model: paulml/OmniBeagleSquaredMBX-v3-7B
    parameters:
      weight: 0.2
      density: 0.2
  - model: chargoddard/servile-harpsichord-cdpo
    parameters:
      weight: 0.3
      density: 0.3
merge_method: rescaled_sample
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
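To reproduce the merge, save the configuration above as `config.yml` and run it through mergekit, either with the `mergekit-yaml` CLI or the Python API. The sketch below uses the Python API as documented in mergekit's README; option names may differ across mergekit versions.

```python
# A minimal sketch of running the merge with mergekit's Python API; the CLI
# equivalent is: mergekit-yaml config.yml ./Calliope-7b
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above, saved locally as config.yml.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Calliope-7b",  # output directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # copy the base tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```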