---
license: apache-2.0
base_model:
- nbeerbower/flammen11-mistral-7B
- nbeerbower/flammen15-gutenberg-DPO-v1-7B
- nbeerbower/Flammen-Kunoichi-7B
- nbeerbower/flammen11X-mistral-7B
- nbeerbower/Maidphin-Kunoichi-7B
- nbeerbower/Suppe-v1-7B
- nbeerbower/flammen10-mistral-7B
- nbeerbower/bruphin-lambda
- nbeerbower/flammen13-mistral-7B
- nbeerbower/bophades-mistral-truthy-DPO-7B
- nbeerbower/flammen17-mistral-7B
- nbeerbower/bophades-mistral-math-DPO-7B
library_name: transformers
tags:
- mergekit
- merge
---
# flammen18-mistral-7B
A Mistral 7B LLM built by merging pretrained models and finetuning. Flammen specializes in character roleplay, creative writing, and general intelligence.
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Model Stock merge method, with nbeerbower/flammen17-mistral-7B as the base.
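Conceptually, Model Stock averages the fine-tuned checkpoints and then interpolates that average back toward the base model, with an interpolation ratio derived from how closely the fine-tuned weights agree. A minimal numpy sketch of the per-layer rule, assuming the ratio t = N·cosθ / ((N−1)·cosθ + 1) from the Model Stock paper, where θ is the average pairwise angle between fine-tuned weight deltas; function and variable names are illustrative, not mergekit's internals:

```python
import numpy as np

def model_stock_layer(base, finetuned):
    """Merge one layer's weights with the Model Stock rule (a sketch).

    base: 1-D array of the base model's layer weights (flattened)
    finetuned: list of 1-D arrays, the fine-tuned variants of that layer
    """
    n = len(finetuned)
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine between fine-tuned deltas, measured relative
    # to the base weights (the angle theta in the Model Stock paper).
    cosines = []
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i], deltas[j]
            cosines.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio: t -> 1 when the fine-tuned models agree
    # (cos_theta -> 1), pulling the merge toward their mean; t -> 0 when
    # their updates are orthogonal, falling back to the base weights.
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    w_avg = np.mean(finetuned, axis=0)
    return t * w_avg + (1 - t) * base
```

In the limiting cases this behaves as expected: identical fine-tunes give t = 1 (the merge equals their average), while orthogonal updates give t = 0 (the merge keeps the base weights).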
### Models Merged
The following models were included in the merge:
- nbeerbower/flammen11-mistral-7B
- nbeerbower/flammen15-gutenberg-DPO-v1-7B
- nbeerbower/Flammen-Kunoichi-7B
- nbeerbower/flammen11X-mistral-7B
- nbeerbower/Maidphin-Kunoichi-7B
- nbeerbower/Suppe-v1-7B
- nbeerbower/flammen10-mistral-7B
- nbeerbower/bruphin-lambda
- nbeerbower/flammen13-mistral-7B
- nbeerbower/bophades-mistral-truthy-DPO-7B
- nbeerbower/bophades-mistral-math-DPO-7B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: nbeerbower/bophades-mistral-truthy-DPO-7B
  - model: nbeerbower/flammen11-mistral-7B
  - model: nbeerbower/Suppe-v1-7B
  - model: nbeerbower/bophades-mistral-math-DPO-7B
  - model: nbeerbower/flammen10-mistral-7B
  - model: nbeerbower/flammen15-gutenberg-DPO-v1-7B
  - model: nbeerbower/bruphin-lambda
  - model: nbeerbower/Maidphin-Kunoichi-7B
  - model: nbeerbower/flammen11X-mistral-7B
  - model: nbeerbower/flammen13-mistral-7B
  - model: nbeerbower/Flammen-Kunoichi-7B
merge_method: model_stock
base_model: nbeerbower/flammen17-mistral-7B
dtype: bfloat16
```
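A merge like this can be reproduced locally by feeding the configuration to mergekit's CLI. A hedged sketch, assuming mergekit is installed (`pip install mergekit`); the output directory name is illustrative, and the actual merge step downloads all twelve source checkpoints, so it is left commented out:

```shell
# Write the merge recipe to disk (same content as the YAML above).
cat > config.yaml <<'EOF'
models:
  - model: nbeerbower/bophades-mistral-truthy-DPO-7B
  - model: nbeerbower/flammen11-mistral-7B
  - model: nbeerbower/Suppe-v1-7B
  - model: nbeerbower/bophades-mistral-math-DPO-7B
  - model: nbeerbower/flammen10-mistral-7B
  - model: nbeerbower/flammen15-gutenberg-DPO-v1-7B
  - model: nbeerbower/bruphin-lambda
  - model: nbeerbower/Maidphin-Kunoichi-7B
  - model: nbeerbower/flammen11X-mistral-7B
  - model: nbeerbower/flammen13-mistral-7B
  - model: nbeerbower/Flammen-Kunoichi-7B
merge_method: model_stock
base_model: nbeerbower/flammen17-mistral-7B
dtype: bfloat16
EOF

# Run the merge (heavy: fetches each 7B checkpoint from the Hub).
# Uncomment on a machine with the disk space and bandwidth for it:
# mergekit-yaml config.yaml ./flammen18-mistral-7B
```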