TheBloke committed on
Commit 23fe8a5
1 Parent(s): 941b2f0

Upload README.md

Files changed (1)
  1. README.md +23 -28
README.md CHANGED
@@ -43,7 +43,7 @@ This repo contains GGUF format model files for [WizardLM's WizardLM 13B 1.0](htt
  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF

- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible.
+ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

  Here is an incomplete list of clients and libraries that are known to support GGUF:

@@ -76,19 +76,12 @@ A chat between a curious user and an artificial intelligence assistant. The assi
  ```

  <!-- prompt-template end -->
- <!-- licensing start -->
- ## Licensing
-
- The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license.
-
- As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.
-
- In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [WizardLM's WizardLM 13B 1.0](https://huggingface.co/WizardLM/WizardLM-13B-V1.0).
- <!-- licensing end -->
+
+
  <!-- compatibility_gguf start -->
  ## Compatibility

- These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
+ These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

  They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

@@ -143,7 +136,7 @@ The following clients/libraries will automatically download models for you, prov

  ### In `text-generation-webui`

- Under Download Model, you can enter the model repo: TheBloke/WizardLM-13B-1.0-GGUF and below it, a specific filename to download, such as: wizardLM-13B-1.0.q4_K_M.gguf.
+ Under Download Model, you can enter the model repo: TheBloke/WizardLM-13B-1.0-GGUF and below it, a specific filename to download, such as: wizardLM-13B-1.0.Q4_K_M.gguf.

  Then click Download.

@@ -152,13 +145,13 @@ Then click Download.
  I recommend using the `huggingface-hub` Python library:

  ```shell
- pip3 install huggingface-hub>=0.17.1
+ pip3 install huggingface-hub
  ```

  Then you can download any individual model file to the current directory, at high speed, with a command like this:

  ```shell
- huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF wizardLM-13B-1.0.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF wizardLM-13B-1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

  <details>
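If you would rather fetch the file from Python than from the command line, the same download can be done with the `hf_hub_download` function from the `huggingface_hub` library. This is only a minimal sketch, re-using the repo and filename from the CLI example above:

```python
# Minimal sketch: download a single GGUF file via the huggingface_hub Python API.
# Uses the same repo and filename as the huggingface-cli example above.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/WizardLM-13B-1.0-GGUF",
    filename="wizardLM-13B-1.0.Q4_K_M.gguf",
    local_dir=".",                  # save to the current directory
    local_dir_use_symlinks=False,   # store the real file, not a symlink
)
print(model_path)  # local path of the downloaded .gguf file
```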
@@ -181,25 +174,25 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

  ```shell
- HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF wizardLM-13B-1.0.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/WizardLM-13B-1.0-GGUF wizardLM-13B-1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

- Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
+ Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
  </details>
  <!-- README_GGUF.md-how-to-download end -->

  <!-- README_GGUF.md-how-to-run start -->
  ## Example `llama.cpp` command

- Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+ Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

  ```shell
- ./main -ngl 32 -m wizardLM-13B-1.0.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
+ ./main -ngl 32 -m wizardLM-13B-1.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
  ```

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

- Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
+ Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
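For a chat session, the same command with `-p <PROMPT>` swapped for `-i -ins` would look roughly like this (a sketch only, re-using the options shown above):

```shell
# Interactive instruct mode: drop the -p prompt string and pass -i -ins instead
./main -ngl 32 -m wizardLM-13B-1.0.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```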
 
@@ -213,35 +206,37 @@ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://git

  You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.

- ### How to load this model from Python using ctransformers
+ ### How to load this model in Python code, using ctransformers

  #### First install the package

- ```bash
+ Run one of the following commands, according to your system:
+
+ ```shell
  # Base ctransformers with no GPU acceleration
- pip install ctransformers>=0.2.24
+ pip install ctransformers
  # Or with CUDA GPU acceleration
- pip install ctransformers[cuda]>=0.2.24
- # Or with ROCm GPU acceleration
- CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
- # Or with Metal GPU acceleration for macOS systems
- CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
+ pip install ctransformers[cuda]
+ # Or with AMD ROCm GPU acceleration (Linux only)
+ CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
+ # Or with Metal GPU acceleration for macOS systems only
+ CT_METAL=1 pip install ctransformers --no-binary ctransformers
  ```

- #### Simple example code to load one of these GGUF models
+ #### Simple ctransformers example code

  ```python
  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-13B-1.0-GGUF", model_file="wizardLM-13B-1.0.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/WizardLM-13B-1.0-GGUF", model_file="wizardLM-13B-1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

  print(llm("AI is going to"))
  ```

  ## How to use with LangChain

- Here's guides on using llama-cpp-python or ctransformers with LangChain:
+ Here are guides on using llama-cpp-python and ctransformers with LangChain:

  * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
  * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
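If you want something more concrete than the links above, here is a minimal, untested sketch of loading one of these files through LangChain's `CTransformers` wrapper; the exact import path and `config` keys should be checked against the ctransformers integration guide linked above:

```python
# Minimal sketch: using this repo's GGUF file via LangChain's CTransformers wrapper.
# Import path and config keys follow the LangChain ctransformers integration;
# treat this as illustrative rather than definitive.
from langchain.llms import CTransformers

llm = CTransformers(
    model="TheBloke/WizardLM-13B-1.0-GGUF",
    model_file="wizardLM-13B-1.0.Q4_K_M.gguf",
    model_type="llama",
    config={"gpu_layers": 50, "max_new_tokens": 256, "temperature": 0.7},
)

print(llm("AI is going to"))
```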
 