---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored
---

# Llama-3.1-8b-Uncensored-Dare

Llama-3.1-8b-Uncensored-Dare is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored)
* [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored)
* [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored)
* [aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored](https://huggingface.co/aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored)

## 🧩 Configuration

```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.0-Uncensored
    parameters:
      density: 0.53
      weight: 0.4
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
    parameters:
      density: 0.53
      weight: 0.3
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
    parameters:
      density: 0.53
      weight: 0.2
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.1-Uncensored
    parameters:
      density: 0.53
      weight: 0.1
merge_method: dare_ties
base_model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored
parameters:
  int8_mask: true
dtype: bfloat16
```
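To give a sense of what the `density` and `weight` parameters above control, here is a minimal NumPy sketch of the DARE step that `dare_ties` applies to each model's delta (its weights minus the base model's): a random `1 - density` fraction of each delta's entries is dropped, the survivors are rescaled by `1/density` to preserve the expected magnitude, and the weighted deltas are summed onto the base. This is an illustrative sketch only; it omits the TIES sign-election step, and the function name `dare_merge` is hypothetical, not part of mergekit's API.

```python
import numpy as np

def dare_merge(base, deltas, densities, weights, rng):
    # DARE: for each task vector (delta), randomly keep ~density of its
    # entries, rescale kept entries by 1/density, then add the weighted
    # pruned delta onto the base weights.
    merged = base.copy()
    for delta, density, weight in zip(deltas, densities, weights):
        mask = rng.random(delta.shape) < density      # keep ~density fraction
        pruned = np.where(mask, delta / density, 0.0)  # rescale survivors
        merged += weight * pruned
    return merged

rng = np.random.default_rng(0)
base = np.zeros(4)                      # toy "base model" weights
deltas = [np.ones(4), 2 * np.ones(4)]   # toy per-model deltas
merged = dare_merge(base, deltas, densities=[0.53, 0.53],
                    weights=[0.4, 0.3], rng=rng)
```

In the real merge, each kept entry contributes `weight * delta / density`, so the four weights (0.4 + 0.3 + 0.2 + 0.1 = 1.0) sum to one and the result stays on the scale of a single model.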