R136a1 committed on
Commit 3f0a176
Parent(s): ddbb903

Update README.md

Files changed (1)
  1. README.md +9 -13
README.md CHANGED
@@ -9,25 +9,21 @@ Other quantized models are available from TheBloke: [GGML](https://huggingface.c
 
 ## Model details
 
- | **Branch** | **Bits** | **Perplexity** | **Desc** |
- |----------------------------------------------------------------------|----------|----------------|----------------------------------------------|
- | [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5 | idk, forgot | Idk why I made this, 1st try |
- | | 4 | | |
- | | 6.5 | 6.1074 | Can run 4096 context size (tokens) on T4 GPU |
- | | 7 | 6.1056 | 2048 max context size for T4 GPU |
- | | 8 | 6.1027 | Just, why? |
+ | **Branch** | **Bits** | **Perplexity** |
+ |----------------------------------------------------------------------|----------|----------------|
+ | [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5 | 6.1018 |
+ | [6bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/6bit) | 6 | 6.1182 |
+ | - | 7 | 6.1056 |
+ | - | 8 | 6.1027 |
 
- To be updated
+ I'll upload the 7-bit and 8-bit quants if someone requests them.
 
 ## Prompt Format
 
- This model primarily uses Alpaca formatting, so for optimal model performance, use:
+ Alpaca format:
 ```
- <System prompt/Character Card>
-
 ### Instruction:
- Your instruction or question here.
- For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.
+
 
 ### Response:
 ```
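
Each quant in the table above lives on its own branch of the repo, so downloading one means pinning a `revision`. A minimal sketch using `huggingface_hub`; only the repo id and branch names come from the table, the rest is my own scaffolding:

```python
# Download one quant branch of the repo; the branch name goes in `revision`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="R136a1/MythoMax-L2-13B-exl2",
    revision="6bit",  # "main" holds the 5-bit quant (see the table above)
)
print(f"Model files downloaded to: {local_dir}")
```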
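After this commit the prompt template is reduced to bare Alpaca headers. A hedged sketch of filling it in; the helper name `build_alpaca_prompt` and the example text are illustrative, not part of the repo:

```python
# Assemble an Alpaca-style prompt matching the template in the README.
def build_alpaca_prompt(instruction: str, system: str = "") -> str:
    parts = []
    if system:  # optional system prompt / character card
        parts += [system, ""]
    parts += ["### Instruction:", instruction, "", "### Response:", ""]
    return "\n".join(parts)

print(build_alpaca_prompt("Summarize what exl2 quantization does."))
```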