---
license: other
language:
- en
---

[ExLlamaV2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) quantizations of [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).

Other quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)

## Model details

| **Branch** | **Bits** | **Perplexity** |
|----------------------------------------------------------------------|----------|----------------|
| [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5        | 6.1018         |
| [6bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/6bit) | 6        | 6.1182         |
| -                                                                    | 7        | 6.1056         |
| -                                                                    | 8        | 6.1027         |

I'll upload the 7- and 8-bit quants if someone requests them.

## Prompt Format

Alpaca format:

```
### Instruction:

### Response:
```
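The Alpaca template can be filled in programmatically before sending text to the model. A minimal sketch in Python (the `build_prompt` helper is illustrative, not part of the model or ExLlamaV2):

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template used by MythoMax."""
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

# The model's completion is generated after the "### Response:" header.
print(build_prompt("Summarize the plot of Hamlet in one sentence."))
```

The trailing `### Response:` header cues the model to begin its answer there.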