---
base_model:
- zhiyuanhucs/qwen2.5-7B-formula-1k-base-final-1-step280
- zhiyuanhucs/qwen2.5-7b-sequence-200-5
- zhiyuanhucs/qwen2.5-7b-backward-reasoning-7B-level-2-0310
library_name: transformers
tags:
- mergekit
- merge
---

# linear4

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:

* [zhiyuanhucs/qwen2.5-7B-formula-1k-base-final-1-step280](https://huggingface.co/zhiyuanhucs/qwen2.5-7B-formula-1k-base-final-1-step280)
* [zhiyuanhucs/qwen2.5-7b-sequence-200-5](https://huggingface.co/zhiyuanhucs/qwen2.5-7b-sequence-200-5)
* [zhiyuanhucs/qwen2.5-7b-backward-reasoning-7B-level-2-0310](https://huggingface.co/zhiyuanhucs/qwen2.5-7b-backward-reasoning-7B-level-2-0310)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: zhiyuanhucs/qwen2.5-7B-formula-1k-base-final-1-step280
    parameters:
      weight: 0.15
  - model: zhiyuanhucs/qwen2.5-7b-sequence-200-5
    parameters:
      weight: 0.15
  - model: zhiyuanhucs/qwen2.5-7b-backward-reasoning-7B-level-2-0310
    parameters:
      weight: 0.7
merge_method: linear
dtype: float32
```
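Since the three weights (0.15 + 0.15 + 0.7) sum to 1.0, the linear method reduces to a straight weighted average of each parameter across the models. The sketch below illustrates that averaging on toy parameter dictionaries in plain Python; mergekit itself operates on full model tensors, and the function name here is only illustrative, not part of the mergekit API.

```python
def linear_merge(state_dicts, weights):
    """Weighted average of per-key parameter lists, mirroring a linear merge.

    Each state_dict maps a parameter name to a flat list of floats;
    all dicts must share the same keys and shapes.
    """
    total = sum(weights)  # normalize in case weights don't sum to 1
    merged = {}
    for key in state_dicts[0]:
        n = len(state_dicts[0][key])
        merged[key] = [
            sum(w * sd[key][i] for w, sd in zip(weights, state_dicts)) / total
            for i in range(n)
        ]
    return merged

# Toy "models" with a single parameter, merged with this card's weights
a = {"w": [1.0]}
b = {"w": [2.0]}
c = {"w": [3.0]}
merged = linear_merge([a, b, c], [0.15, 0.15, 0.7])
# 0.15*1.0 + 0.15*2.0 + 0.7*3.0 = 2.55
```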