---
base_model: []
tags:
- mergekit
- merge
- mistral
- german
- deutsch
- english
- roleplay
- chatml
language:
- de
- en
---
# merge

This is an experimental merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.

### Models Merged

The following models were included in the merge:
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)

#### Why these two models?

Because, to my knowledge, they are the two best models when it comes to German language generation. As of this date (01/21/2024), DiscoLM German 7B is by far the best German model: it makes far fewer grammatical errors, and its German generally sounds good. However, it is fine-tuned on Mistral v0.2 or even v0.1. Mistral FT Optimized 1227 produces much better German than Mistral 7B v0.2 and other German fine-tunes that make grammar errors in almost every sentence, but even that model is a good step behind DiscoLM German 7B and produces less well-formed German sentences. The idea was therefore to combine these two models to get an even better German model, especially for German roleplay.

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: OpenPipe/mistral-ft-optimized-1227
        layer_range: [0, 32]
      - model: DiscoResearch/DiscoLM_German_7b_v1
        layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1227
parameters:
  t:
    - value: [0.5, 0.9]
dtype: bfloat16
```

These settings are taken from the model [oshizo/japanese-e5-mistral-7b_slerp](https://huggingface.co/oshizo/japanese-e5-mistral-7b_slerp). The interpolation factor `t` ramps from 0.5 in the early layers to 0.9 in the last layers, so deeper layers lean more heavily toward DiscoLM German 7B (in mergekit's SLERP, `t=0` returns the base model and `t=1` returns the other model).
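To reproduce the merge, the configuration above can be saved as `config.yml` and passed to mergekit. Below is a minimal sketch using mergekit's Python entry point as shown in its README; the exact API may differ between mergekit versions, and the output path `./merged-model` is just a placeholder:

```python
# Minimal reproduction sketch. Assumes mergekit (linked above) is installed
# and the SLERP configuration shown above is saved as config.yml.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration into mergekit's config object.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the SLERP merge and write the result to ./merged-model.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer into the output
    ),
)
```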
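Given the `chatml` tag (inherited from DiscoLM German 7B), prompts are presumably expected in ChatML format. Here is a minimal usage sketch with transformers, assuming the ChatML template carries over to the merge; the model path is a placeholder for the merged model:

```python
# Minimal usage sketch. Assumption: the merged model keeps the ChatML
# prompt format of DiscoLM German 7B, as suggested by the "chatml" tag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"  # placeholder path to the merged model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype from the config
    device_map="auto",
)

# ChatML prompt: a system turn followed by a German user turn.
prompt = (
    "<|im_start|>system\n"
    "Du bist ein hilfreicher Assistent.<|im_end|>\n"  # "You are a helpful assistant."
    "<|im_start|>user\n"
    "Erzähl mir eine kurze Geschichte über einen Ritter.<|im_end|>\n"  # "Tell me a short story about a knight."
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```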