---
base_model:
- darkc0de/XortronCriminalComputingConfig
- OddTheGreat/Apparatus_24B
- Entropicengine/Trifecta-Max-24b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---

~ Alchemy of three 🔥🧪 ~

![image/png](https://huggingface.co/Entropicengine/DarkTriad-24b/resolve/main/dark-triad.png)

# DarkTriad-24B

# Recommended ST preset for RP:
- [Sphiratrioth](https://huggingface.co/sphiratrioth666/SillyTavern-Presets-Sphiratrioth)

# ☕ Support My Work
If you like my work, consider [buying me a coffee](https://ko-fi.com/entropicengine) to support future merges, GPU time, and experiments.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [darkc0de/XortronCriminalComputingConfig](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) as the base.

### Models Merged

The following models were included in the merge:
* [OddTheGreat/Apparatus_24B](https://huggingface.co/OddTheGreat/Apparatus_24B)
* [Entropicengine/Trifecta-Max-24b](https://huggingface.co/Entropicengine/Trifecta-Max-24b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: darkc0de/XortronCriminalComputingConfig
chat_template: auto
merge_method: dare_ties
modules:
  default:
    slices:
    - sources:
      - layer_range: [0, 40]
        model: darkc0de/XortronCriminalComputingConfig
        parameters:
          weight: 0.4
      - layer_range: [0, 40]
        model: Entropicengine/Trifecta-Max-24b
        parameters:
          weight: 0.3
      - layer_range: [0, 40]
        model: OddTheGreat/Apparatus_24B
        parameters:
          weight: 0.3
out_dtype: bfloat16
parameters:
  density: 1.0
tokenizer: {}
```
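
For anyone who wants to reproduce or adapt the merge, the config above can be run through mergekit directly. Below is a minimal sketch using mergekit's Python API (the `mergekit-yaml` CLI works too); the file name `config.yaml` and the output path are illustrative, not part of this repo:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML from the "Configuration" section above,
# saved locally as config.yaml (illustrative file name).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE TIES merge; the three source models are
# downloaded from the Hub on first run.
run_merge(
    merge_config,
    out_path="./DarkTriad-24B",  # illustrative output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # carry the base tokenizer into the output
        lazy_unpickle=True,              # stream weights to keep RAM usage down
    ),
)
```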
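
Once merged (or when pulling the published weights), the model loads with stock `transformers`. A quick inference sketch, assuming the repo id `Entropicengine/DarkTriad-24b` (taken from the image URL above) and an illustrative prompt:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Entropicengine/DarkTriad-24b"  # assumed repo id, matching the image URL

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype
    device_map="auto",
)

# chat_template: auto in the merge config, so the tokenizer ships a chat template.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```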