bartowski committed
Commit c9073fc
1 Parent(s): 5881bad

Update README.md

Files changed (1)
  1. README.md +8 -15
README.md CHANGED
@@ -19,26 +19,19 @@ pipeline_tag: text-generation
  
  Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.16">turboderp's ExLlamaV2 v0.0.16</a> for quantization.
  
- ## The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)
+ <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>
  
  Each branch contains an individual bits per weight, with the main branch containing only the measurement.json for further conversions.
  
- Conversion was done using the default calibration dataset.
- 
- Default arguments were used except when the bits per weight is above 6.0; at that point the lm_head layer is quantized at 8 bits per weight instead of the default 6.
- 
  Original model: https://huggingface.co/mlabonne/Beyonder-4x7B-v3
  
- 
- <a href="https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/8_0">8.0 bits per weight</a>
- 
- <a href="https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/6_5">6.5 bits per weight</a>
- 
- <a href="https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/5_0">5.0 bits per weight</a>
- 
- <a href="https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/4_25">4.25 bits per weight</a>
- 
- <a href="https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/3_5">3.5 bits per weight</a>
+ | Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
+ | ------ | ---- | ------------ | --------- | ---------- | ---------- | ----------- |
+ | [8_0](https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/8_0) | 8.0 | 8.0 | 24.8 GB | 26.3 GB | 28.3 GB | Maximum quality that ExLlamaV2 can produce, near-unquantized performance. |
+ | [6_5](https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/6_5) | 6.5 | 8.0 | 20.3 GB | 21.8 GB | 23.8 GB | Near-unquantized performance at vastly reduced size, **recommended**. |
+ | [5_0](https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/5_0) | 5.0 | 6.0 | 15.8 GB | 17.3 GB | 19.3 GB | Slightly lower quality vs 6.5. |
+ | [4_25](https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/4_25) | 4.25 | 6.0 | 14.0 GB | 15.5 GB | 17.5 GB | GPTQ-equivalent bits per weight, slightly higher quality; great for 16 GB cards at 16k context. |
+ | [3_5](https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2/tree/3_5) | 3.5 | 6.0 | 11.3 GB | 12.8 GB | 14.8 GB | Lower quality, not recommended; only suitable for 12 GB cards. |
  
  
  ## Download instructions
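As a minimal sketch (not part of the diffed README itself), one way to fetch a single quant branch is huggingface_hub's `snapshot_download`; the branch names come from the table above, and the local directory is a placeholder:

```python
# Minimal sketch: download one quant branch (here 6.5 bpw) of the exl2 repo.
# Assumes `pip install huggingface_hub`; the local_dir path is a placeholder.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/Beyonder-4x7B-v3-exl2",
    revision="6_5",  # branch name from the table above
    local_dir="Beyonder-4x7B-v3-exl2-6_5",
)
```

Any other branch from the table can be substituted for `6_5`; the `main` branch would only yield the measurement.json.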