---
license: other
language:
- en
---
[ExLlamaV2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) quantizations of [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).

Other quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)

## Model details

| **Branch**                                                           | **Bits** | **Perplexity** |
|----------------------------------------------------------------------|----------|----------------|
| [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5        | 6.1018         |
| [6bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/6bit) | 6        | 6.1182         |
| -                                                                    | 7        | 6.1056         |
| -                                                                    | 8        | 6.1027         |

I'll upload the 7-bit and 8-bit quants if someone requests them.
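
Each quantization lives on its own branch, so a specific bit-width can be fetched by passing the branch name as the `revision`. A minimal sketch using `huggingface_hub` (the local directory name is an arbitrary choice):

```python
# Sketch: download one quantization branch of this repo with huggingface_hub.
# repo_id and revision come from the table above; local_dir is an assumption.
from huggingface_hub import snapshot_download

model_dir = snapshot_download(
    repo_id="R136a1/MythoMax-L2-13B-exl2",
    revision="6bit",  # use "main" for the 5-bit quant
    local_dir="MythoMax-L2-13B-exl2-6bit",
)
print(model_dir)  # path to the downloaded model files
```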

## Prompt Format

Alpaca format:
```
### Instruction:
{prompt}

### Response:
```
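
As a small sketch, the user instruction simply replaces `{prompt}` in the template above before being passed to the loader or generator of your choice (the helper name here is hypothetical):

```python
# Minimal sketch: wrap a user instruction in the Alpaca format shown above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_prompt("Write a short greeting."))
```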
 