---
base_model:
- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-7B-v0.1
- Nondzu/Mistral-7B-codealpaca-lora
- TIGER-Lab/MAmmoTH2-7B
library_name: transformers
tags:
- mergekit
- merge
---
# task-wise-crossentropy

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Task Arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base model.

### Models Merged

The following models were included in the merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
* [Nondzu/Mistral-7B-codealpaca-lora](https://huggingface.co/Nondzu/Mistral-7B-codealpaca-lora)
* [TIGER-Lab/MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# task-wise LLM-AdaMerge with cross-entropy loss
base_model: mistralai/Mistral-7B-v0.1
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      weight: 0.10558585822582245
  - model: TIGER-Lab/MAmmoTH2-7B
    parameters:
      weight: 0.45740658044815063
  - model: Nondzu/Mistral-7B-codealpaca-lora
    parameters:
      weight: 0.5316656231880188
merge_method: task_arithmetic
parameters:
  normalize: false
  lambda: 1.0
dtype: float16
tokenizer:
  source: union
```
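
Conceptually, task arithmetic builds a "task vector" for each fine-tuned model (its parameter delta relative to the base) and adds the weighted sum of those deltas back onto the base. With `normalize: false` the weights above are applied as-is, scaled by `lambda`. Below is a minimal sketch of that computation over raw state dicts; it is not the mergekit implementation, and it assumes all checkpoints share identical tensor names and shapes:

```python
import torch

def task_arithmetic_merge(base_sd, tuned_sds, weights, lam=1.0):
    """Weighted task-vector merge: base + lambda * sum_i w_i * (tuned_i - base).

    base_sd:   state dict of the base model (Mistral-7B-v0.1 here)
    tuned_sds: state dicts of the fine-tuned models, in the same order as weights
    weights:   per-model weights (e.g. the learned task-wise values above)
    """
    merged = {}
    for name, base_param in base_sd.items():
        # Accumulate the weighted parameter deltas in float32 for stability.
        delta = sum(
            w * (sd[name].float() - base_param.float())
            for sd, w in zip(tuned_sds, weights)
        )
        # Cast back to float16 to match the dtype declared in the config.
        merged[name] = (base_param.float() + lam * delta).to(torch.float16)
    return merged
```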
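
Since the card declares `library_name: transformers`, the merged checkpoint should load like any other causal LM. A usage sketch follows; the repo id is a placeholder for wherever this merge is hosted, and `device_map="auto"` assumes `accelerate` is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "task-wise-crossentropy"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Write a Python function that checks whether a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```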