---
license: other
language:
  - en
---

ExLlamaV2 quantizations of Gryphe's MythoMax L2 13B.

Other quantized models are available from TheBloke: GGML - GPTQ - GGUF - AWQ

## Model details

| Branch | Bits | Perplexity | Desc |
|--------|------|------------|------|
| main   | 5    | idk, forgot | Idk why I made this, 1st try |
| 4      | 4    |        |      |
| 6.5    | 6.5  | 6.1074 | Can run 4096 context size (tokens) on T4 GPU |
| 7      | 7    | 6.1056 | 2048 max context size for T4 GPU |
| 8      | 8    | 6.1027 | Just, why? |

To be updated
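
Each branch can be downloaded on its own. Below is a minimal sketch using `huggingface_hub`; the repo id is a placeholder (substitute this repository's actual id), and `revision` is the branch name from the table above.

```python
# Download a single quantization branch from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<user>/<MythoMax-L2-13B-exl2>",  # placeholder, replace with this repo's id
    revision="6.5",                           # branch name from the table above
)
print(local_dir)  # path to the downloaded model files
```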

## Prompt Format

This model primarily uses Alpaca formatting, so for optimal model performance, use:

```
<System prompt/Character Card>

### Instruction:
Your instruction or question here.
For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

### Response:
```
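
For reference, here is a minimal inference sketch with the `exllamav2` Python package using the template above. The model path, character names, and sampling values are placeholders, and the calls follow the library's basic example script, so check the exllamav2 repo if the API has changed.

```python
# Load a downloaded EXL2 branch and generate from an Alpaca-style prompt.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/MythoMax-L2-13B-exl2"  # placeholder local path to the branch you downloaded
config.prepare()

model = ExLlamaV2(config)
model.load()
tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Alpaca-style prompt built from the template above (names are examples only).
prompt = (
    "You are Alice, a cheerful adventurer.\n\n"  # system prompt / character card
    "### Instruction:\n"
    "Write Alice's next reply in a chat between Bob and Alice. "
    "Write a single reply only.\n\n"
    "### Response:\n"
)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # placeholder sampling values
settings.top_p = 0.9

output = generator.generate_simple(prompt, settings, 200)  # generate up to 200 new tokens
print(output)
```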
