---
base_model:
- SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- s-emanuilov/LLMBG-Llama-3.1-8B-BG-Reasoning-v0.1
- OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09
- grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
- DeepAuto-AI/Explore_Llama-3.1-8B-Inst
- nvidia/Llama-3.1-Nemotron-Nano-8B-v1
- FreedomIntelligence/HuatuoGPT-o1-8B
- HiTZ/Latxa-Llama-3.1-8B-Instruct
- prithivMLmods/Llama-3.1-8B-Open-SFT
- passing2961/Thanos-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [nvidia/Llama-3.1-Nemotron-Nano-8B-v1](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) as the base.

### Models Merged

The following models were included in the merge:

* [SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B](https://huggingface.co/SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B)
* [s-emanuilov/LLMBG-Llama-3.1-8B-BG-Reasoning-v0.1](https://huggingface.co/s-emanuilov/LLMBG-Llama-3.1-8B-BG-Reasoning-v0.1)
* [OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09](https://huggingface.co/OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09)
* [grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter](https://huggingface.co/grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter)
* [DeepAuto-AI/Explore_Llama-3.1-8B-Inst](https://huggingface.co/DeepAuto-AI/Explore_Llama-3.1-8B-Inst)
* [FreedomIntelligence/HuatuoGPT-o1-8B](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B)
* [HiTZ/Latxa-Llama-3.1-8B-Instruct](https://huggingface.co/HiTZ/Latxa-Llama-3.1-8B-Instruct)
* [prithivMLmods/Llama-3.1-8B-Open-SFT](https://huggingface.co/prithivMLmods/Llama-3.1-8B-Open-SFT)
* [passing2961/Thanos-8B](https://huggingface.co/passing2961/Thanos-8B)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)

### Configuration

The following YAML configuration was used to produce this model. Every one of the ten contributing models uses the same `density` (0.1) and `weight` (0.1):

```yaml
models:
  - model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
    # no parameters necessary for the base model
  - model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
    parameters:
      density: 0.1
      weight: 0.1
  - model: arcee-ai/Llama-3.1-SuperNova-Lite
    parameters:
      density: 0.1
      weight: 0.1
  - model: passing2961/Thanos-8B
    parameters:
      density: 0.1
      weight: 0.1
  - model: prithivMLmods/Llama-3.1-8B-Open-SFT
    parameters:
      density: 0.1
      weight: 0.1
  - model: FreedomIntelligence/HuatuoGPT-o1-8B
    parameters:
      density: 0.1
      weight: 0.1
  - model: s-emanuilov/LLMBG-Llama-3.1-8B-BG-Reasoning-v0.1
    parameters:
      density: 0.1
      weight: 0.1
  - model: HiTZ/Latxa-Llama-3.1-8B-Instruct
    parameters:
      density: 0.1
      weight: 0.1
  - model: grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
    parameters:
      density: 0.1
      weight: 0.1
  - model: DeepAuto-AI/Explore_Llama-3.1-8B-Inst
    parameters:
      density: 0.1
      weight: 0.1
  - model: OpenLLM-Ro/RoLlama3.1-8b-Instruct-2024-10-09
    parameters:
      density: 0.1
      weight: 0.1
merge_method: ties
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
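### Notes on the merge parameters

The merge can be reproduced by saving the configuration above to a file and running mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`.

For intuition about `density` and `weight`: TIES merging forms each model's "task vector" (its delta from the base), trims each task vector to its highest-magnitude entries (`density: 0.1` keeps roughly the top 10%), elects a majority sign per parameter across models, and combines only the sign-agreeing entries, scaled by `weight`. The sketch below is a simplified, illustrative rendering of that procedure as described in the TIES paper, not mergekit's actual implementation; mergekit's handling of weighting and `normalize: false` differs in detail, and the function and variable names here are hypothetical.

```python
# Illustrative sketch of TIES merging for a single tensor, following
# Yadav et al. (2023). NOT mergekit's implementation; names are hypothetical.
import torch

def ties_merge_tensor(base: torch.Tensor,
                      finetuned: list[torch.Tensor],
                      density: float = 0.1,
                      weight: float = 0.1) -> torch.Tensor:
    # 1. Task vectors: each fine-tune's delta from the base weights.
    deltas = [ft - base for ft in finetuned]

    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        cutoff = d.abs().flatten().topk(k).values[-1]
        trimmed.append(torch.where(d.abs() >= cutoff, d, torch.zeros_like(d)))

    stacked = torch.stack(trimmed)  # shape: (n_models, *base.shape)

    # 3. Elect sign: per-entry majority sign across the trimmed deltas.
    elected = torch.sign(stacked.sum(dim=0))

    # 4. Disjoint merge: average only entries agreeing with the elected sign,
    #    then scale by `weight` (uniform weights let us factor it out).
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + weight * merged
```

With ten models each at `density: 0.1` and `weight: 0.1`, every fine-tune contributes only a thin, high-magnitude slice of its changes, which keeps the merge relatively close to the Nemotron base.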
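## Usage

A minimal inference sketch with Hugging Face Transformers. The repo id below is a placeholder; substitute wherever this merge is actually hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/this-merge"  # placeholder: substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize TIES model merging in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```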