---
base_model:
- darkc0de/XortronCriminalComputingConfig
- TheDrummer/Cydonia-24B-v3
- Sorawiz/MistralCreative-24B-Chat
library_name: transformers
tags:
- mergekit
- merge
---
~ The power of Three
![image.png](https://huggingface.co/Entropicengine/Trifecta-Max-24b/resolve/main/generation-51e6b898-485d-43a0-9d6c-69fb24ad1c41%20(1).png)
# Trifecta-Max-24b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
# Recommended SillyTavern (ST) preset for RP:
- [Sphiratrioth](https://huggingface.co/sphiratrioth666/SillyTavern-Presets-Sphiratrioth)
# ☕ Support My Work
If you like my work, consider [buying me a coffee](https://ko-fi.com/entropicengine) to support future merges, GPU time, and experiments.
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [darkc0de/XortronCriminalComputingConfig](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) as the base.
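
DARE TIES combines two ideas: DARE randomly drops a fraction of each fine-tune's parameter deltas and rescales the survivors so their expected value is preserved, and TIES elects a majority sign per parameter and discards conflicting contributions before summing. The sketch below is an illustrative per-tensor outline in PyTorch, not mergekit's actual implementation; the function names and simplified flow are assumptions for clarity.

```python
import torch

def dare_sparsify(base: torch.Tensor, tuned: torch.Tensor, density: float) -> torch.Tensor:
    """DARE: Drop delta parameters at random And REscale the survivors
    so the expected value of the task vector is preserved."""
    delta = tuned - base                                   # task vector of one fine-tune
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_ties_merge(base, tuned_models, weights, density):
    """TIES-style sign election over the weighted, sparsified task
    vectors: keep only entries that agree with the majority sign,
    then add the combined delta back onto the base weights."""
    deltas = torch.stack([
        w * dare_sparsify(base, t, density)
        for t, w in zip(tuned_models, weights)
    ])
    elected = torch.sign(deltas.sum(dim=0))                # majority sign per parameter
    agree = torch.sign(deltas) == elected                  # mask out conflicting entries
    return base + (deltas * agree).sum(dim=0)
```

Note that the config below sets `density: 1.0`, so the random drop step keeps every delta entry and the merge effectively reduces to a weighted TIES-style combination of the three source models.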
### Models Merged
The following models were included in the merge:
* [TheDrummer/Cydonia-24B-v3](https://huggingface.co/TheDrummer/Cydonia-24B-v3)
* [Sorawiz/MistralCreative-24B-Chat](https://huggingface.co/Sorawiz/MistralCreative-24B-Chat)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: darkc0de/XortronCriminalComputingConfig
chat_template: auto
merge_method: dare_ties
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 40]
            model: darkc0de/XortronCriminalComputingConfig
            parameters:
              weight: 0.4
          - layer_range: [0, 40]
            model: Sorawiz/MistralCreative-24B-Chat
            parameters:
              weight: 0.3
          - layer_range: [0, 40]
            model: TheDrummer/Cydonia-24B-v3
            parameters:
              weight: 0.3
out_dtype: bfloat16
parameters:
  density: 1.0
tokenizer: {}
```
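
To reproduce the merge itself, saving the YAML above as `config.yaml` and running mergekit's `mergekit-yaml config.yaml ./Trifecta-Max-24b` should regenerate the weights. To simply try the result, a standard transformers text-generation pipeline should work; the snippet below is a minimal sketch (the prompt and sampling settings are illustrative), and a 24B model in bfloat16 needs substantial VRAM or offloading via `device_map="auto"`.

```python
import torch
from transformers import pipeline

# Load the merged model; bfloat16 matches the merge's out_dtype.
pipe = pipeline(
    "text-generation",
    model="Entropicengine/Trifecta-Max-24b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Open a noir detective scene in three sentences."},
]
out = pipe(messages, max_new_tokens=200, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```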