---
inference: false
license: bigcode-openrail-m
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# WizardLM's WizardCoder 15B 1.0 GPTQ

These files are GPTQ 4-bit model files for [WizardLM's WizardCoder 15B 1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0).

They are the result of quantising to 4-bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GPTQ)
* [4, 5, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardCoder-15B-1.0-GGML)
* [WizardLM's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)

## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui. If you'd prefer to script the download rather than use the UI, see the sketch after these steps.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardCoder-15B-1.0-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `WizardCoder-15B-1.0-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

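As an alternative to the UI download above, a minimal `huggingface_hub` sketch along these lines should fetch the same files; the `local_dir` path is just an illustrative example:

```python
# A minimal scripted download, assuming huggingface_hub is installed
# (pip install huggingface_hub). The local_dir path is illustrative.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/WizardCoder-15B-1.0-GPTQ",
    local_dir="models/TheBloke_WizardCoder-15B-1.0-GPTQ",
)
```
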
## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/WizardCoder-15B-1.0-GPTQ"
# Or, to load it locally, pass the local download path
# model_name_or_path = "/path/to/models/TheBloke_WizardCoder-15B-1.0-GPTQ"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
                                           use_safetensors=True,
                                           device="cuda:0",
                                           use_triton=use_triton,
                                           quantize_config=None)

# Prevent printing spurious transformers errors when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# The special <|end|> token (ID 49155) denotes the end of a turn
prompt_template = "<|system|>\n<|end|>\n<|user|>\n{query}<|end|>\n<|assistant|>"
prompt = prompt_template.format(query="How do I sort a list in Python?")

outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.2, top_k=50, top_p=0.95, eos_token_id=49155)
print(outputs[0]['generated_text'])
# Example output:
# You can sort a list in Python by using the sort() method. Here's an example:\n\n```\nnumbers = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]\nnumbers.sort()\nprint(numbers)\n```\n\nThis will sort the list in place and print the sorted list.
```

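If you'd rather skip the pipeline wrapper, a minimal sketch reusing the `model`, `tokenizer`, and `prompt` objects from the example above, and calling `generate()` directly, might look like this:

```python
# A sketch of direct generation, reusing model, tokenizer and prompt from above.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output_ids = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.2,
    eos_token_id=49155,  # the <|end|> turn-end token mentioned above
)
print(tokenizer.decode(output_ids[0]))
```
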
## Provided files

**gptq_model-4bit--1g.safetensors**

This will work with AutoGPTQ, in CUDA or Triton mode. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa, and, as listed below, this particular file does not work with GPTQ-for-LLaMa at all; if you have issues, please use AutoGPTQ instead.

It was created without group_size, to lower VRAM requirements, and with --act-order (desc_act), to boost inference accuracy as much as possible (see the config sketch after the list below).

* `gptq_model-4bit--1g.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * Works with text-generation-webui, including one-click-installers.
  * Does not work with GPTQ-for-LLaMa.
  * Parameters: Groupsize = -1. Act Order / desc_act = True.

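For reference, those parameters correspond to an AutoGPTQ quantisation config along these lines; this is a sketch of the settings described above, not necessarily the exact invocation used to produce this file:

```python
from auto_gptq import BaseQuantizeConfig

# A sketch matching the parameters listed above; not necessarily the
# exact configuration used to quantise this repo's file.
quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit quantisation
    group_size=-1,   # no group size, lowering VRAM requirements
    desc_act=True,   # act-order, boosting inference accuracy
)
```
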
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Ajan Kanaga, Kalila, Derek Yates, Sean Connelly, Luke, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, trip7s trip, Jonathan Leane, Talal Aujan, Artur Olbinski, Cory Kujawski, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Johann-Peter Hartmann.

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: WizardLM's WizardCoder 15B 1.0

This is the full-weight version of WizardCoder.

Repository: https://github.com/nlpxucan/WizardLM
Paper: