---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# L3.1-RP-Hero-8B-3

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with G:/7B/Llama-3.1-8B-DarkIdol-Instruct-1.2-Uncensored as the base.

### Models Merged

The following models were included in the merge:

* G:/7B/L3-Umbral-Mind-RP-v0.3-8B
* G:/7B/Llama-3.1-8B-ArliAI-RPMax-v1.1
* G:/7B/L3-Pantheon-RP-1.0-8b

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: G:/7B/L3-Pantheon-RP-1.0-8b
    parameters:
      weight: [1, 1, 0.75, 0.5, 0.25, 0.25, 0.05, 0.01]
  - model: G:/7B/L3-Umbral-Mind-RP-v0.3-8B
    parameters:
      weight: [0, 0, 0.25, 0.35, 0.4, 0.25, 0.30, 0.04]
  - model: G:/7B/Llama-3.1-8B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0, 0, 0, 0.15, 0.35, 0.5, 0.65, 0.95]
merge_method: dare_ties
base_model: G:/7B/Llama-3.1-8B-DarkIdol-Instruct-1.2-Uncensored
dtype: bfloat16
```
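
## Usage

The merge itself can be re-run from the configuration above with mergekit's `mergekit-yaml` command. Once the merged weights exist, they load like any Llama 3.1 checkpoint. Below is a minimal loading sketch, assuming the merged model has been saved locally; the `model_path` value is a placeholder, not an official repository id.

```python
# Minimal loading sketch. The model path below is a placeholder for wherever
# the merged checkpoint was written; adjust it to the actual location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./L3.1-RP-Hero-8B-3"  # placeholder: local output directory of the merge

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used in the merge config
    device_map="auto",
)

prompt = "Write a short in-character greeting."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```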