---
license: cc-by-nc-4.0
tags:
- conversational
- mixtral
- merge
- mergekit
---
```
  e88 88e                                  d8
 d888 888b   8888 8888   ,"Y88b  888 8e    d88
C8888 8888D  8888 8888  "8" 888  888 88b  d88888
 Y888 888P   Y888 888P  ,ee 888  888 888   888
  "88 88"     "88 88"   "88 888  888 888   888
      b
      8b,

  e88'Y88                    d8               888
 d888  'Y   ,"Y88b  888,8,   d88      ,e e,   888
C8888      "8" 888  888 "   d88888   d88 88b  888
 Y888  ,d  ,ee 888  888      888     888   ,  888
  "88,d88  "88 888  888      888     "YeeP"   888

                PROUDLY PRESENTS
```
# TeTO-MS-8x7b-exl2-rpcal
Quantized using 200 samples of 8,192 tokens each from the RP-oriented [PIPPA-cleaned](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.
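
For the curious, the conversion roughly corresponds to an invocation of exllamav2's `convert.py` like the sketch below. The script and its flags are real, but the paths, the parquet filename, and the per-branch bit settings shown here are illustrative assumptions, not the exact command used.

```python
# Hedged sketch of the EXL2 conversion step (exllamav2's convert.py).
# Paths and the parquet filename are illustrative; the flags mirror the
# description above (200 calibration rows of 8192 tokens; 4bpw/6-bit
# head corresponds to the 4b6h branch).
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "TeTO-MS-8x7b",            # unquantized source model
        "-o", "work-dir",                # scratch/working directory
        "-cf", "TeTO-MS-8x7b-4b6h",      # compiled output for one branch
        "-c", "pippa_cleaned.parquet",   # RP-oriented calibration data
        "-r", "200",                     # 200 calibration samples
        "-l", "8192",                    # 8192 tokens per sample
        "-b", "4.0",                     # target bits per weight
        "-hb", "6",                      # 6-bit lm_head
    ],
    check=True,
)
```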
Branches:
- `main` -- `measurement.json`
- `8b8h` -- 8bpw, 8bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
- `4b6h` -- 4bpw, 6bit lm_head
- `3b6h` -- 3bpw, 6bit lm_head
- `2.25b6h` -- 2.25bpw, 6bit lm_head
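
To grab a single quant level, point `huggingface_hub` at the branch name (a minimal sketch; the local directory name is arbitrary):

```python
# Minimal sketch: download one quant by branch name.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Quant-Cartel/TeTO-MS-8x7b-exl2-rpcal",
    revision="4b6h",                      # branch name = quant level
    local_dir="TeTO-MS-8x7b-exl2-4b6h",   # arbitrary local path
)
```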
Original model link: [InferenceIllusionist/TeTO-MS-8x7b](https://huggingface.co/InferenceIllusionist/TeTO-MS-8x7b)
Original model README below.
-----
## TeTO-MS-8x7b
Tesoro + Typhon + OpenGPT
Presenting a Model Stock experiment combining the unique strengths of the following 8x7b Mixtral models (a sketch of a comparable merge config follows the list):
* Tess-2.0-Mixtral-8x7B-v0.2 / [migtissera](https://huggingface.co/migtissera) / General Purpose
* Typhon-Mixtral-v1 / [Sao10K](https://huggingface.co/Sao10K) / Creative & Story Completion
* Open_Gpt4_8x7B_v0.2 / [rombodawg](https://huggingface.co/rombodawg) / Conversational
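
The exact recipe isn't published here, but a Model Stock merge of these three models in mergekit would look roughly like the sketch below. The base model and dtype are assumptions (Mixtral-8x7B-v0.1 is the common ancestor of all three), and the repo paths are inferred from the author links above.

```python
# Hedged sketch of a Model Stock merge via mergekit; base_model and dtype
# are assumptions, not the published recipe.
from pathlib import Path
import subprocess

config = """\
merge_method: model_stock
base_model: mistralai/Mixtral-8x7B-v0.1   # assumed common base
models:
  - model: migtissera/Tess-2.0-Mixtral-8x7B-v0.2
  - model: Sao10K/Typhon-Mixtral-v1
  - model: rombodawg/Open_Gpt4_8x7B_v0.2
dtype: bfloat16
"""

# Write the config and run mergekit's CLI on it.
Path("model_stock.yml").write_text(config)
subprocess.run(["mergekit-yaml", "model_stock.yml", "TeTO-MS-8x7b"], check=True)
```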
Weighted (iMat) GGUFs: https://huggingface.co/Quant-Cartel/TeTO-MS-8x7b-iMat-GGUF
EXL2 rpcal quants courtesy of the Quant Cartel: https://huggingface.co/Quant-Cartel/TeTO-MS-8x7b-exl2-rpcal
### Recommended Template
* Basic: Alpaca format (a minimal prompt sketch follows this list)
* Advanced: See context/instruct/sampler settings in [our new Recommended Settings repo](https://huggingface.co/Quant-Cartel/Recommended-Settings/tree/main/Teto-MS-8x7b).
* Huge shout out to [rAIfle](https://huggingface.co/rAIfle) for his original work on the Wizard 8x22b templates, which were adapted for this model.
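
For the basic option, the standard Alpaca format looks like this (a minimal sketch; the exact system preamble wording varies between presets):

```python
# Minimal sketch of the standard Alpaca prompt format.
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Greet the party in character as a grizzled innkeeper."))
```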