---
base_model:
- hitachi-nlp/Llama-3.1-70B-FLDx2
- nbeerbower/Llama3.1-Gutenberg-Doppel-70B
- Tarek07/Legion-V2.1-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [Tarek07/Legion-V2.1-LLaMa-70B](https://huggingface.co/Tarek07/Legion-V2.1-LLaMa-70B) as the base. DARE TIES randomly drops a fraction of each contributing model's parameter deltas from the base (the fraction kept is set by `density`), rescales the surviving entries, and then resolves sign conflicts between the sparsified deltas TIES-style before adding the weighted result back to the base model; a sketch of the drop-and-rescale step follows the configuration below.

### Models Merged

The following models were included in the merge:
* [hitachi-nlp/Llama-3.1-70B-FLDx2](https://huggingface.co/hitachi-nlp/Llama-3.1-70B-FLDx2)
* [nbeerbower/Llama3.1-Gutenberg-Doppel-70B](https://huggingface.co/nbeerbower/Llama3.1-Gutenberg-Doppel-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Tarek07/Legion-V2.1-LLaMa-70B
    parameters:
      weight: 0.60
      density: 0.6
  - model: nbeerbower/Llama3.1-Gutenberg-Doppel-70B
    parameters:
      weight: 0.20
      density: 0.4
  - model: hitachi-nlp/Llama-3.1-70B-FLDx2
    parameters:
      weight: 0.20
      density: 0.4
merge_method: dare_ties
base_model: Tarek07/Legion-V2.1-LLaMa-70B
parameters:
  normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: union
```
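The `density` values above control DARE's drop-and-rescale step. Here is an illustrative sketch of what a single density value does to one model's delta from the base; it mirrors the idea, not mergekit's actual implementation:

```python
import torch

def dare_drop_and_rescale(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep a `density` fraction of a fine-tune's delta from the
    base model and rescale the survivors by 1/density, which preserves the
    delta's expected value. Illustrative only, not mergekit's code."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density
```

At `density: 0.4`, roughly 40% of a model's delta entries survive (each scaled by 2.5); TIES-style sign election then combines the sparsified deltas using the configured weights (0.60 / 0.20 / 0.20).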
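### Reproducing the Merge

A minimal sketch of reproducing the merge with mergekit's Python API, assuming the configuration above is saved as `merge-config.yaml` (the CLI equivalent is `mergekit-yaml merge-config.yaml ./merged-model`):

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown in this card.
with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # run on GPU when available
        copy_tokenizer=True,  # write the union tokenizer next to the weights
        low_cpu_memory=False,
    ),
)
```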
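## Usage

A minimal sketch of loading the merged model with transformers, using the `bfloat16` dtype and `llama3` chat template declared in the configuration. `model_id` is a placeholder: substitute the local output path or the Hub repo this card is published under.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./merged-model"  # placeholder: substitute the actual repo id or path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches `out_dtype` in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```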