---
base_model:
- prithivMLmods/QwQ-MathOct-7B
- pe-nlp/R1-Qwen2.5-7B-Instruct
- prithivMLmods/Viper-Coder-HybridMini-v1.3
- lkoenig/BBAI_230_Xiaqwen
- Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview](https://huggingface.co/Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview) as the base model.

### Models Merged

The following models were included in the merge:

* [prithivMLmods/QwQ-MathOct-7B](https://huggingface.co/prithivMLmods/QwQ-MathOct-7B)
* [pe-nlp/R1-Qwen2.5-7B-Instruct](https://huggingface.co/pe-nlp/R1-Qwen2.5-7B-Instruct)
* [prithivMLmods/Viper-Coder-HybridMini-v1.3](https://huggingface.co/prithivMLmods/Viper-Coder-HybridMini-v1.3)
* [lkoenig/BBAI_230_Xiaqwen](https://huggingface.co/lkoenig/BBAI_230_Xiaqwen)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
    # no parameters necessary for the base model
  - model: prithivMLmods/Viper-Coder-HybridMini-v1.3
    parameters:
      density: 0.25
      weight: 0.25
  - model: lkoenig/BBAI_230_Xiaqwen
    parameters:
      density: 0.25
      weight: 0.25
  - model: prithivMLmods/QwQ-MathOct-7B
    parameters:
      density: 0.25
      weight: 0.25
  - model: pe-nlp/R1-Qwen2.5-7B-Instruct
    parameters:
      density: 0.25
      weight: 0.25
# equal weight and density values for all models (following the original Dyanka recipe)
merge_method: ties
base_model: Xiaojian9992024/Qwen2.5-Dyanka-7B-Preview
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
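
## Usage

Below is a minimal sketch of loading the merged model with `transformers`. The repo id is a placeholder (substitute the actual Hugging Face repo id or a local path to the mergekit output directory), and the `bfloat16` dtype simply mirrors the merge configuration above; the chat-template usage assumes the tokenizer ships a Qwen2.5-style chat template.

```python
# Minimal usage sketch; the repo id below is a placeholder, not the published name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-qwen2.5-7b"  # placeholder: replace with the real repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

# Apply the chat template before generating (assumes a Qwen2.5-style template).
messages = [{"role": "user", "content": "Solve 12 * 17 and explain the steps."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```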