---
base_model:
- Steelskull/L3.3-Electra-R1-70b
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- migtissera/Tess-3-Llama-3.1-70B
library_name: transformers
tags:
- mergekit
- merge
---
# about

A series of experiments in "empowering" models with my usual stabilizer (L3.1 70B Hermes 3 lorablated and its finetunes) and the recently discovered perplexity-dropper (L3.1 70B Tess 3).

- This version is based on SteelSkull's Electra R1, my new favorite merge.
- Benchmark-wise, it gives good results, especially on ARC-C.

---
# benchmarks

- PPL 512 Wikitext (English): 3.10 (very good)
- ARC-C: 65.90 (I had never merged anything >= 64 before, and that's already very high; most of my L3 merges land between 53 and 59. 64 + 3% is beyond the margin of error, so this might be a record.)
- ARC-E: 83.85 (the usual plateau of my best merges)

---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Steelskull/L3.3-Electra-R1-70b](https://huggingface.co/Steelskull/L3.3-Electra-R1-70b) as a base.

### Models Merged

The following models were included in the merge:
* [mlabonne/Hermes-3-Llama-3.1-70B-lorablated](https://huggingface.co/mlabonne/Hermes-3-Llama-3.1-70B-lorablated)
* [migtissera/Tess-3-Llama-3.1-70B](https://huggingface.co/migtissera/Tess-3-Llama-3.1-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
models:
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
chat_template: auto
tokenizer:
  source: union
```

This is equivalent to:

```yaml
merge_method: model_stock
models:
  - model: Steelskull/L3.3-Electra-R1-70b
    parameters:
      weight: 1.0
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
chat_template: auto
tokenizer:
  source: union
```

and to:

```yaml
merge_method: model_stock
models:
  - model: Steelskull/L3.3-Electra-R1-70b
    parameters:
      weight: 1.0
  - model: migtissera/Tess-3-Llama-3.1-70B
    parameters:
      weight: 1.0
  - model: mlabonne/Hermes-3-Llama-3.1-70B-lorablated
    parameters:
      weight: 1.0
base_model: Steelskull/L3.3-Electra-R1-70b
dtype: bfloat16
out_dtype: bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale: false
  filter_wise: false
chat_template: auto
tokenizer:
  source: union
```
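
### Usage

A minimal sketch of loading the merged model with `transformers`, assuming a standard Llama-3-style chat template; the repository id below is a placeholder, substitute the actual repo name of this merge:

```python
# Minimal loading sketch. "your-username/your-merge-repo" is a placeholder,
# not the actual repository id of this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/your-merge-repo"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",           # requires the `accelerate` package
)

messages = [{"role": "user", "content": "Summarize model stock merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```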