---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

NousResearch/Nous-Capybara-34B and migtissera/Tess-M-Creative-v1.0, TIES-merged with mergekit and then quantized with exllamav2 on 200 rows (400K tokens) of calibration data: a long Vicuna-format chat, a sci-fi story, and a fantasy story.

Quantized to 4bpw, enough for **~47K context on a 24GB GPU.**

The following merge config was used:

```
models:
  - model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
    # no parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.0
    parameters:
      density: 0.6
      weight: 1.0
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      density: 0.6
      weight: 1.0
merge_method: ties
base_model: /home/alpha/Storage/Models/Raw/larryvrh_Yi-34B-200K-Llamafied
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
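
To reproduce the merge, a config like the one above can be passed to mergekit's `mergekit-yaml` entry point. A minimal sketch, assuming the config is saved as `config.yml` and the local model paths are swapped for ones valid on your machine; `/path/to/output` is a placeholder, `--cuda` runs the merge on GPU, and flags may vary by mergekit version:

`mergekit-yaml config.yml /path/to/output --cuda`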

First exllamav2 quantization pass, which measures the model and writes the measurement file given by `-om`:

`python convert.py --in_dir /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K -o /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K-exl2 -om /home/alpha/FastModels/capytessmes.json --cal_dataset /home/alpha/Documents/smol.parquet -l 2048 -r 80 -ml 2048 -mr 40 -gr 40 -ss 4096 -nr -b 3.5 -hb 6`

Second exllamav2 quantization pass, which reuses that measurement (via `-m`) to compile the final quant:

`python convert.py --in_dir /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K -o /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K-exl2 -m /home/alpha/FastModels/capytessmes.json --cal_dataset /home/alpha/Documents/medium.parquet -l 2048 -r 200 -ml 2048 -mr 40 -gr 200 -ss 4096 -b 3.1 -hb 6 -cf /home/alpha/FastModels/Capybara-Tess-Yi-34B-200K-exl2-31bpw -nr`
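
Once downloaded, the quant can be run with exllamav2's Python API. A minimal sketch, assuming exllamav2 is installed; the model path and sampling settings are illustrative, and `max_seq_len` should be lowered if it does not fit in your VRAM:

```
# Sketch of loading the exl2 quant with exllamav2; paths and settings are examples.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/Capybara-Tess-Yi-34B-200K-exl2"  # placeholder path
config.prepare()
config.max_seq_len = 47104  # ~47K; reduce if you run out of VRAM

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated while the model loads
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative sampling settings
settings.top_p = 0.9

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Hello!\nASSISTANT:"
print(generator.generate_simple(prompt, settings, 200))
```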

Both parent models use 200K context and Vicuna prompt syntax, so:

# Prompt Format:

```
SYSTEM: ...
USER: ...
ASSISTANT: ...
```
Stop token: `</s>`
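
As an illustration, a small helper that assembles this format; a sketch only, since the exact newline conventions between turns are an assumption rather than something this card specifies:

```
# Hypothetical helper for the Vicuna-style format above.
def build_prompt(system, turns, user_msg):
    parts = [f"SYSTEM: {system}"]
    for user, assistant in turns:
        parts.append(f"USER: {user}")
        parts.append(f"ASSISTANT: {assistant}</s>")  # </s> ends each assistant turn
    parts.append(f"USER: {user_msg}")
    parts.append("ASSISTANT:")  # the model completes from here until </s>
    return "\n".join(parts)

print(build_prompt("You are a helpful assistant.", [], "Write a short story."))
```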

***

Credits:

https://github.com/cg123/mergekit

https://huggingface.co/NousResearch/Nous-Capybara-34B/discussions

https://huggingface.co/migtissera/Tess-M-Creative-v1.0

https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied

https://huggingface.co/01-ai/Yi-34B-200K