Text Generation
Transformers
GGUF
English
llama
TheBloke committed
Commit ff52f6d
1 Parent(s): ab462bf

Upload README.md

Files changed (1)
  1. README.md +93 -20
README.md CHANGED
@@ -1,4 +1,5 @@
  ---
  datasets:
  - conceptofmind/cot_submix_original
  - conceptofmind/flan2021_submix_original
@@ -9,10 +10,22 @@ language:
  - en
  license: llama2
  model_creator: Stability AI
- model_link: https://huggingface.co/stabilityai/StableBeluga-13B
  model_name: StableBeluga 13B
  model_type: llama
  pipeline_tag: text-generation
  quantized_by: TheBloke
  ---

@@ -37,23 +50,25 @@ quantized_by: TheBloke
  - Model creator: [Stability AI](https://huggingface.co/stabilityai)
  - Original model: [StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B)

  ## Description

  This repo contains GGUF format model files for [Stability AI's StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B).

  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF

- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

- The key benefit of GGUF is that it is a extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.

- Here are a list of clients and libraries that are known to support GGUF:
- * [llama.cpp](https://github.com/ggerganov/llama.cpp).
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with full GPU accel across multiple platforms and GPU architectures. Especially good for story telling.
- * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI with GPU acceleration on both Windows (NVidia and AMD), and macOS.
  * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
  * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
@@ -62,9 +77,9 @@ Here are a list of clients and libraries that are known to support GGUF:
  <!-- repositories-available start -->
  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga-13B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/StableBeluga-13B-GGML)
  * [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga-13B)
  <!-- repositories-available end -->

@@ -83,12 +98,14 @@ Here are a list of clients and libraries that are known to support GGUF:
  ```

  <!-- prompt-template end -->
  <!-- compatibility_gguf start -->
  ## Compatibility

- These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)

- They are now also compatible with many third party UIs and libraries - please see the list at the top of the README.

  ## Explanation of quantisation methods
  <details>
@@ -129,21 +146,75 @@ Refer to the Provided Files table below to see what files use which methods, and

  <!-- README_GGUF.md-provided-files end -->

- <!-- README_GGUF.md-how-to-run start -->
- ## Example `llama.cpp` command

- Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.

- For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.

  ```
- ./main -t 10 -ngl 32 -m stablebeluga-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
  ```
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

- Change `-c 4096` to the desired sequence length for this model. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
@@ -200,10 +271,12 @@ For further support, and discussions on these models and AI in general, join us

  [TheBloke AI's Discord server](https://discord.gg/theblokeai)

- ## Thanks, and how to contribute.

  Thanks to the [chirper.ai](https://chirper.ai) team!

  I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

  If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
@@ -215,7 +288,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

  **Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

  Thank you to all my generous patrons and donaters!
 
  ---
+ base_model: https://huggingface.co/stabilityai/StableBeluga-13B
  datasets:
  - conceptofmind/cot_submix_original
  - conceptofmind/flan2021_submix_original

  - en
  license: llama2
  model_creator: Stability AI
  model_name: StableBeluga 13B
  model_type: llama
  pipeline_tag: text-generation
+ prompt_template: '### System:
+
+ {system_message}
+
+
+ ### User:
+
+ {prompt}
+
+
+ ### Assistant:
+
+ '
  quantized_by: TheBloke
  ---

  - Model creator: [Stability AI](https://huggingface.co/stabilityai)
  - Original model: [StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B)

+ <!-- description start -->
  ## Description

  This repo contains GGUF format model files for [Stability AI's StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B).

+ <!-- description end -->
  <!-- README_GGUF.md-about-gguf start -->
  ### About GGUF

+ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation and support for special tokens. It also supports metadata, and is designed to be extensible.

+ Here is an incomplete list of clients and libraries that are known to support GGUF:

+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
  * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
  * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
  * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
  * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
 
  <!-- repositories-available start -->
  ## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/StableBeluga-13B-AWQ)
  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga-13B-GPTQ)
  * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga-13B-GGUF)
  * [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga-13B)
  <!-- repositories-available end -->

  ```

  <!-- prompt-template end -->
+
+
  <!-- compatibility_gguf start -->
  ## Compatibility

+ These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

+ They are also compatible with many third-party UIs and libraries - please see the list at the top of this README.

  ## Explanation of quantisation methods
  <details>
 

  <!-- README_GGUF.md-provided-files end -->

+ <!-- README_GGUF.md-how-to-download start -->
+ ## How to download GGUF files
+
+ **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
+
+ The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
+ - LM Studio
+ - LoLLMS Web UI
+ - Faraday.dev
+
+ ### In `text-generation-webui`
+
+ Under Download Model, you can enter the model repo: TheBloke/StableBeluga-13B-GGUF and below it, a specific filename to download, such as: stablebeluga-13b.q4_K_M.gguf.
+
+ Then click Download.
+
+ ### On the command line, including multiple files at once
+
+ I recommend using the `huggingface-hub` Python library:
+
+ ```shell
+ pip3 install 'huggingface-hub>=0.17.1'
+ ```
+
+ Then you can download any individual model file to the current directory, at high speed, with a command like this:
+
+ ```shell
+ huggingface-cli download TheBloke/StableBeluga-13B-GGUF stablebeluga-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ <details>
+ <summary>More advanced huggingface-cli download usage</summary>
+
+ You can also download multiple files at once with a pattern:
+
+ ```shell
+ huggingface-cli download TheBloke/StableBeluga-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
+
+ For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
+
+ To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
+
+ ```shell
+ pip3 install hf_transfer
+ ```
+
+ And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
+
+ ```shell
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/StableBeluga-13B-GGUF stablebeluga-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```
+
+ Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
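+
+ PowerShell users can set the variable like this instead; this is an equivalent sketch for PowerShell, which the steps above don't cover:
+
+ ```shell
+ # PowerShell: enable hf_transfer for this session, then download as above
+ $env:HF_HUB_ENABLE_HF_TRANSFER = 1
+ huggingface-cli download TheBloke/StableBeluga-13B-GGUF stablebeluga-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ ```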
+ </details>
+ <!-- README_GGUF.md-how-to-download end -->
+
+ <!-- README_GGUF.md-how-to-run start -->
+ ## Example `llama.cpp` command
+
+ Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
+
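+ If you built llama.cpp from a git checkout, one way to check is the sketch below (an illustrative addition; it assumes `git` is installed and the clone lives in `./llama.cpp`):
+
+ ```shell
+ # Succeeds if the required commit is an ancestor of the checkout's HEAD
+ git -C llama.cpp merge-base --is-ancestor d0cee0d36d5be95a0d9088b674dbb27354107221 HEAD && echo "new enough"
+ ```
+
+ Then run: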
+ ```shell
+ ./main -ngl 32 -m stablebeluga-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:"
  ```

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

+ Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
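+
+ For example, the command above becomes the following for an interactive chat session (same flags, with the `-p` prompt replaced by interactive instruction mode):
+
+ ```shell
+ ./main -ngl 32 -m stablebeluga-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
+ ```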

  [TheBloke AI's Discord server](https://discord.gg/theblokeai)

+ ## Thanks, and how to contribute

  Thanks to the [chirper.ai](https://chirper.ai) team!

+ Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
+
  I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

  If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

  **Special thanks to**: Aemon Algiz.

+ **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

  Thank you to all my generous patrons and donaters!