---
base_model:
- darkc0de/XortronCriminalComputingConfig
- TheDrummer/Cydonia-24B-v3
- Sorawiz/MistralCreative-24B-Chat
library_name: transformers
tags:
- mergekit
- merge
---
~ The power of Three
# Trifecta-Max-24b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
# Recommended SillyTavern (ST) preset for RP
- [Sphiratrioth](https://huggingface.co/sphiratrioth666/SillyTavern-Presets-Sphiratrioth)
# ☕ Support My Work
If you like my work, consider [buying me a coffee](https://ko-fi.com/entropicengine) to support future merges, GPU time, and experiments.
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [darkc0de/XortronCriminalComputingConfig](https://huggingface.co/darkc0de/XortronCriminalComputingConfig) as a base.
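For intuition, here is a rough per-tensor sketch of the DARE step, assuming PyTorch; the function name and shapes are illustrative, not mergekit internals. DARE randomly drops a fraction (1 - density) of each task vector's entries and rescales the survivors so the expected update is unchanged, after which TIES-style sign election combines the sparsified deltas. Note that with `density: 1.0` (as in the config below) nothing is dropped.

```python
# Illustrative sketch of the DARE drop-and-rescale step (not mergekit code).
import torch

def dare_delta(base: torch.Tensor, finetuned: torch.Tensor, density: float) -> torch.Tensor:
    """Drop a random (1 - density) fraction of the task vector's entries,
    then rescale the survivors by 1/density to preserve the expected update."""
    delta = finetuned - base                               # task vector
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

# With density=1.0 every entry survives, so the task vectors pass
# straight through to the TIES sign election and weighted sum.
```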
### Models Merged
The following models were included in the merge:
* [TheDrummer/Cydonia-24B-v3](https://huggingface.co/TheDrummer/Cydonia-24B-v3)
* [Sorawiz/MistralCreative-24B-Chat](https://huggingface.co/Sorawiz/MistralCreative-24B-Chat)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: darkc0de/XortronCriminalComputingConfig
chat_template: auto
merge_method: dare_ties
modules:
  default:
    slices:
      - sources:
          - layer_range: [0, 40]
            model: darkc0de/XortronCriminalComputingConfig
            parameters:
              weight: 0.4
          - layer_range: [0, 40]
            model: Sorawiz/MistralCreative-24B-Chat
            parameters:
              weight: 0.3
          - layer_range: [0, 40]
            model: TheDrummer/Cydonia-24B-v3
            parameters:
              weight: 0.3
out_dtype: bfloat16
parameters:
  density: 1.0
tokenizer: {}
```
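To reproduce the merge, this config can be passed to mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./Trifecta-Max-24b`). Once merged, the model loads like any other Mistral-based 24B checkpoint via transformers; a minimal sketch, assuming the repo id matches this card's title:

```python
# Minimal loading sketch; the repo id below is assumed from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "entropicengine/Trifecta-Max-24b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene opener."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```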