R136a1 committed ed66c38 (parent: 7b3a20d)

Update README.md

Files changed (1):
  1. README.md +19 -21

README.md CHANGED
@@ -1,35 +1,33 @@
- ---
- license: other
- language:
- - en
- ---
- An improved, potentially even perfected variant of MythoMix, my [MythoLogic-L2](https://huggingface.co/Gryphe/MythoLogic-L2-13b) and [Huginn](https://huggingface.co/The-Face-Of-Goonery/Huginn-13b-FP16) merge using a highly experimental tensor type merge technique. The main difference from MythoMix is that I allowed more of Huginn to intermingle with the single tensors located at the front and end of a model, resulting in increased coherency across the entire structure.

- The script and the accompanying templates I used to produce both can [be found here](https://github.com/Gryphe/BlockMerge_Gradient/tree/main/YAML).

- This model is proficient at both roleplaying and storywriting due to its unique nature.

- Quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) (You're the best!)

  ## Model details

- The idea behind this merge is that each layer is composed of several tensors, which are in turn responsible for specific functions. Using MythoLogic-L2's robust understanding as its input and Huginn's extensive writing capability as its output seems to have resulted in a model that excels at both, confirming my theory. (More details to be released at a later time.)

- This type of merge cannot be illustrated, as each of its 363 tensors had a unique ratio applied to it. As with my prior merges, gradients were part of these ratios to further fine-tune its behaviour.
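The per-tensor gradient merge described above can be sketched as linear interpolation between two parent tensors, with a gradient of ratios across layers. This is an illustrative toy, not the actual BlockMerge_Gradient script (which applies a distinct ratio to each of the 363 tensors); `blend_tensors` and `layer_ratios` are hypothetical names:

```python
import numpy as np

def blend_tensors(a: np.ndarray, b: np.ndarray, ratio: float) -> np.ndarray:
    """Linearly interpolate two parent tensors; ratio=0 keeps `a`, ratio=1 keeps `b`."""
    return (1.0 - ratio) * a + ratio * b

def layer_ratios(n_layers: int, start: float, end: float) -> list:
    """A gradient of blend ratios across layers, e.g. favouring one parent
    near the input side and the other near the output side."""
    return list(np.linspace(start, end, n_layers))

# Toy example: 4 layers, ratios sweeping from 0.2 (mostly parent A) to 0.8 (mostly parent B).
ratios = layer_ratios(4, 0.2, 0.8)
a = np.zeros((2, 2))  # stand-in for a parent-A tensor
b = np.ones((2, 2))   # stand-in for a parent-B tensor
merged = [blend_tensors(a, b, r) for r in ratios]
```

A real merge would walk the two checkpoints' state dicts and pick a ratio per named tensor rather than per layer index.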

  ## Prompt Format

- This model primarily uses Alpaca formatting, so for optimal model performance, use:
  ```
- <System prompt/Character Card>
-
  ### Instruction:
- Your instruction or question here.
- For roleplay purposes, I suggest the following - Write <CHAR NAME>'s next reply in a chat between <YOUR NAME> and <CHAR NAME>. Write a single reply only.

  ### Response:
- ```
-
- ---
- license: other
- ---
+ [EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) Quantization of [Gryphe's MythoMax L2 13B](https://huggingface.co/Gryphe/MythoMax-L2-13b).
+
+ Other quantized models are available from TheBloke: [GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML) - [GPTQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ) - [GGUF](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGUF) - [AWQ](https://huggingface.co/TheBloke/MythoMax-L2-13B-AWQ)
+

  ## Model details

+ | Branch | Bits | Perplexity | Description |
+ |----------------------------------------------------------------------|------|------------|----------------------------------------------------------------|
+ | [3bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/3bit) | 3 | 6.3666 | Lowest-bit quant that still performs well |
+ | [4bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/4bit) | 4 | 6.1601 | Slightly better than 4-bit GPTQ; easily 8K context on a T4 GPU |
+ | [main](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/main) | 5 | 6.1018 | Up to 6144 context size on a T4 GPU |
+ | [6bit](https://huggingface.co/R136a1/MythoMax-L2-13B-exl2/tree/6bit) | 6 | 6.1182 | 4096 context size (tokens) on a T4 GPU |
+ | - | 7 | 6.1056 | 2048 max context size on a T4 GPU |
+ | - | 8 | 6.1027 | Just, why? |

+ I'll upload the 7- and 8-bit quants if someone requests them. (I don't know why the 5-bit quant's perplexity is lower than that of the higher-bit quants; I may have done something wrong.)
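For readers unfamiliar with the perplexity column: perplexity is the exponential of the mean per-token negative log-likelihood on an evaluation text, so lower means the quant predicts the text better. A minimal sketch with made-up loss values (the function name and numbers are illustrative, not taken from this repo's measurements):

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean per-token negative log-likelihood); lower is better."""
    return math.exp(sum(nlls) / len(nlls))

# Hypothetical per-token losses for two quants of the same model on the same text:
ppl_higher_bit = perplexity([1.80, 1.82, 1.81])  # ~6.11, in the range of the table above
ppl_lower_bit = perplexity([1.84, 1.87, 1.86])   # ~6.40; fewer bits usually costs quality
```

Because the same evaluation text is used for every row, the column is directly comparable across quants.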

  ## Prompt Format

+ Alpaca format:
  ```
 
 
  ### Instruction:
+
+
+
+

  ### Response:
+ ```
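The Alpaca layout above can be assembled programmatically; `alpaca_prompt` is a hypothetical helper for illustration, not part of this repo or of exllamav2:

```python
def alpaca_prompt(instruction: str, system: str = "") -> str:
    """Assemble a prompt in the Alpaca layout shown above: an optional
    system prompt / character card, then the Instruction and Response headers."""
    parts = []
    if system:
        parts.append(system)
    parts.append(f"### Instruction:\n{instruction}")
    parts.append("### Response:\n")  # generation continues after this header
    return "\n\n".join(parts)

print(alpaca_prompt("Write a short greeting.", system="You are a helpful assistant."))
```

The returned string ends right after `### Response:`, which is where the model's completion is expected to start.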