TheBloke committed on
Commit ebbc10e
1 Parent(s): 0791f9d

Upload new GPTQs with varied parameters

Files changed (1): README.md (+140 -39)
README.md CHANGED
@@ -1,16 +1,20 @@
  ---
- license: other
  datasets:
  - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
  inference: false
  ---
  <!-- header start -->
  <div style="width: 100%;">
  <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
@@ -18,60 +22,153 @@ inference: false
  </div>
  <!-- header end -->

- # WizardLM - uncensored: An Instruction-following LLM Using Evol-Instruct

- These files are GPTQ 4bit model files for [Eric Hartford's 'uncensored' version of WizardLM](https://huggingface.co/ehartford/WizardLM-30B-Uncensored).

- It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

- ## Other repositories available

- * [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GPTQ)
- * [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGML)
- * [Eric's unquantised model in HF format](https://huggingface.co/ehartford/WizardLM-30B-Uncensored)

- ## How to easily download and use this model in text-generation-webui

- Open the text-generation-webui UI as normal.

  1. Click the **Model tab**.
- 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-30B-Uncensored-GPTQ`.
  3. Click **Download**.
- 4. Wait until it says it's finished downloading.
- 5. Click the **Refresh** icon next to **Model** in the top left.
- 6. In the **Model drop-down**: choose the model you just downloaded, `WizardLM-30B-Uncensored-GPTQ`.
- 7. If you see an error in the bottom right, ignore it - it's temporary.
- 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`
- 9. Click **Save settings for this model** in the top right.
- 10. Click **Reload the Model** in the top right.
- 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

- ## Provided files

- **Compatible file - WizardLM-30B-uncensored-GPTQ-4bit.act-order.safetensors**

- In the `main` branch - the default one - you will find `WizardLM-30B-uncensored-GPTQ-4bit.act-order.safetensors`

- This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility

- It was created with the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui.

- * `wizard-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors`
- * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
- * Works with AutoGPTQ. Use `strict=False` to load.
- * Works with text-generation-webui one-click-installers
- * Parameters: Groupsize = None. act-order.
- * Command used to create the GPTQ:
- ```
- python llama.py /workspace/models/ehartford_WizardLM-30B-Uncensored wikitext2 --wbits 4 --true-sequential --act-order --save_safetensors /workspace/eric-30B/gptq/WizardLM-30B-Uncensored-GPTQ-4bit.act-order.safetensors
- ```

  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

- [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

  ## Thanks, and how to contribute.
@@ -86,18 +183,22 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.

  Thank you to all my generous patrons and donaters!
  <!-- footer end -->
- # WizardLM-30B-Uncensored original model card

  This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

  Shout out to the open source AI/ML community, and everyone who helped me out.

- Note:
- An uncensored model has no guardrails.
  You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
  Publishing anything this model generates is the same as publishing it yourself.
  You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.
 
  ---
  datasets:
  - ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
  inference: false
+ license: other
+ model_type: llama
+ tags:
+ - uncensored
  ---
+
  <!-- header start -->
  <div style="width: 100%;">
  <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
  <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>

  </div>
  <!-- header end -->

+ # Eric Hartford's WizardLM 30B Uncensored GPTQ
+
+ These files are GPTQ model files for [Eric Hartford's WizardLM 30B Uncensored](https://huggingface.co/ehartford/WizardLM-30B-Uncensored).
+
+ Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

+ These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate).

+ ## Repositories available

+ * [GPTQ models for GPU inference, with multiple quantisation parameter options](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GGML)
+ * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/WizardLM-30B-Uncensored)
+
+ ## Prompt template: WizardLM
+
+ ```
+ {prompt}
+ ### Response:
+ ```
+
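+ For illustration, a minimal sketch of filling this template in Python (the instruction text is just an example):
+
+ ```python
+ # Wrap a user instruction in the WizardLM prompt format shown above
+ instruction = "Tell me about AI"
+ prompt = f"""{instruction}
+ ### Response:
+ """
+ ```
+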
+ ## Provided files

+ Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

+ Each separate quant is in a different branch. See below for instructions on fetching from different branches.

+ | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
+ | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
+ | main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality, and without Act Order to improve AutoGPTQ speed. |
+ | gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
+ | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no Act Order. Slightly higher VRAM requirements than 3-bit None. |
+
+ ## How to download from branches
+
+ - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/WizardLM-30B-uncensored-GPTQ:gptq-4bit-32g-actorder_True`
+ - With Git, you can clone a branch with:
+ ```
+ git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/WizardLM-30B-uncensored-GPTQ
+ ```
+ - In Python Transformers code, the branch is the `revision` parameter; see below.
+
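+ You can also fetch a specific branch programmatically with the `huggingface_hub` library. A minimal sketch, assuming you want the `gptq-4bit-32g-actorder_True` branch (the `local_dir` path is just an example):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download one branch of the repo into a local directory (example path)
+ snapshot_download(
+     repo_id="TheBloke/WizardLM-30B-uncensored-GPTQ",
+     revision="gptq-4bit-32g-actorder_True",
+     local_dir="WizardLM-30B-uncensored-GPTQ"
+ )
+ ```
+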
+ ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
+
+ Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to do a manual install.

  1. Click the **Model tab**.
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-30B-uncensored-GPTQ`.
+    - To download from a specific branch, enter for example `TheBloke/WizardLM-30B-uncensored-GPTQ:gptq-4bit-32g-actorder_True`.
+    - See Provided Files above for the list of branches for each option.
  3. Click **Download**.
+ 4. The model will start downloading. Once it's finished it will say "Done".
+ 5. In the top left, click the refresh icon next to **Model**.
+ 6. In the **Model** dropdown, choose the model you just downloaded: `WizardLM-30B-uncensored-GPTQ`.
+ 7. The model will automatically load, and is now ready for use!
+ 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
+    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`; a sketch of inspecting that file follows this list.
+ 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

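+ As an illustration of what text-generation-webui reads from `quantize_config.json`, here is a minimal sketch, assuming the model was downloaded to a local folder named `WizardLM-30B-uncensored-GPTQ` (the path is just an example):
+
+ ```python
+ import json
+
+ # Example path; point this at wherever the model was downloaded
+ with open("WizardLM-30B-uncensored-GPTQ/quantize_config.json") as f:
+     config = json.load(f)
+
+ # For the main branch, expect bits=4, group_size=-1 (i.e. None) and
+ # desc_act=True, matching the Provided Files table above
+ print(config)
+ ```
+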
+ ## How to use this GPTQ model from Python code
+
+ First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
+
+ `GITHUB_ACTIONS=true pip install auto-gptq`

+ Then try the following example code:

+ ```python
+ from transformers import AutoTokenizer, pipeline, logging
+ from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

+ model_name_or_path = "TheBloke/WizardLM-30B-uncensored-GPTQ"
+ model_basename = "WizardLM-30B-Uncensored-GPTQ-4bit--1g.act.order"

+ use_triton = False

+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         model_basename=model_basename,
+         use_safetensors=True,
+         trust_remote_code=False,
+         device="cuda:0",
+         use_triton=use_triton,
+         quantize_config=None)
+
+ """
+ To download from a specific branch, use the revision parameter, as in this example:
+
+ model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
+         revision="gptq-4bit-32g-actorder_True",
+         model_basename=model_basename,
+         use_safetensors=True,
+         trust_remote_code=False,
+         device="cuda:0",
+         quantize_config=None)
+ """
+
+ prompt = "Tell me about AI"
+ prompt_template = f'''{prompt}
+ ### Response:
+ '''
+
+ print("\n\n*** Generate:")
+
+ input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
+ output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
+ print(tokenizer.decode(output[0]))
+
+ # Inference can also be done using transformers' pipeline
+
+ # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
+ logging.set_verbosity(logging.CRITICAL)
+
+ print("*** Pipeline:")
+ pipe = pipeline(
+     "text-generation",
+     model=model,
+     tokenizer=tokenizer,
+     max_new_tokens=512,
+     temperature=0.7,
+     top_p=0.95,
+     repetition_penalty=1.15
+ )
+
+ print(pipe(prompt_template)[0]['generated_text'])
+ ```
+
+ ## Compatibility
+
+ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.
+
+ ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:

+ [TheBloke AI's Discord server](https://discord.gg/theblokeai)

  ## Thanks, and how to contribute.

  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
+
+ **Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex, Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost, Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius, Imad Khwaja, Pierre Kircher, terasurfer, Asp the Wyvern, John Villwock, theTransient, zynix, Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.

  Thank you to all my generous patrons and donaters!
+
  <!-- footer end -->
+
+ # Original model card: Eric Hartford's WizardLM 30B Uncensored

  This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

  Shout out to the open source AI/ML community, and everyone who helped me out.

+ Note:
+ An uncensored model has no guardrails.
  You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
  Publishing anything this model generates is the same as publishing it yourself.
  You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.