TheBloke committed on
Commit 4bb6ada
1 Parent(s): ebfe7f8

Update README.md

Files changed (1)
  1. README.md +60 -51
README.md CHANGED
@@ -1,6 +1,27 @@
  ---
  inference: false
- license: other
  ---

  <!-- header start -->
@@ -17,81 +38,69 @@ license: other
  </div>
  <!-- header end -->

- # MosaicML's MPT-30B-chat GGML

- These files are GGML format model files for [MosaicML's MPT-30B-chat](https://huggingface.co/mosaicml/mpt-30b-chat).

- GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp)
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
- * [ctransformers](https://github.com/marella/ctransformers)

  ## Repositories available

- * [4-bit GPTQ models for GPU inference](https://huggingface.co/none)
  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/mpt-30B-chat-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mosaicml/mpt-30b-chat)

- <!-- compatibility_ggml start -->
- ## Compatibility

- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`

- I have quantised these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.

- These are guaranteed to be compatible with any UIs, tools and libraries released since late May.

- ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

- These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

- They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python and ctransformers. Other tools and libraries may or may not be compatible - check their documentation if in doubt.

- ## Explanation of the new k-quant methods

- The new methods available are:
- * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
- * GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.

- Refer to the Provided Files table below to see what files use which methods, and how.
  <!-- compatibility_ggml end -->

  ## Provided files
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | mpt-30b-chat.ggmlv0.q4_0.bin | q4_0 | 4 | 16.85 GB | 19.35 GB | Original llama.cpp quant method, 4-bit. |
- | mpt-30b-chat.ggmlv0.q4_1.bin | q4_1 | 4 | 18.73 GB | 21.23 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
- | mpt-30b-chat.ggmlv0.q5_0.bin | q5_0 | 5 | 20.60 GB | 23.10 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | mpt-30b-chat.ggmlv0.q5_1.bin | q5_1 | 5 | 22.47 GB | 24.97 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy and resource usage, and slower inference. |
- | mpt-30b-chat.ggmlv0.q8_0.bin | q8_0 | 8 | 31.83 GB | 34.33 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

- ## How to run in `llama.cpp`
-
- I use the following command line; adjust for your tastes and needs:
-
- ```
- ./main -t 10 -ngl 32 -m mpt-30b-chat.ggmlv0.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
- ```
- If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
-
- If you are not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
-
- Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
-
- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
-
- ## How to run in `text-generation-webui`
-
- Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
-
  <!-- footer start -->
  ## Discord
 
  ---
+ license: cc-by-nc-sa-4.0
+ datasets:
+ - camel-ai/code
+ - ehartford/wizard_vicuna_70k_unfiltered
+ - anon8231489123/ShareGPT_Vicuna_unfiltered
+ - teknium1/GPTeacher/roleplay-instruct-v2-final
+ - teknium1/GPTeacher/codegen-isntruct
+ - timdettmers/openassistant-guanaco
+ - camel-ai/math
+ - project-baize/baize-chatbot/medical_chat_data
+ - project-baize/baize-chatbot/quora_chat_data
+ - project-baize/baize-chatbot/stackoverflow_chat_data
+ - camel-ai/biology
+ - camel-ai/chemistry
+ - camel-ai/ai_society
+ - jondurbin/airoboros-gpt4-1.2
+ - LongConversations
+ - camel-ai/physics
+ tags:
+ - Composer
+ - MosaicML
+ - llm-foundry
  inference: false
  ---

  <!-- header start -->

  </div>
  <!-- header end -->

+ # MosaicML's MPT-30B-Chat GGML

+ These files are GGML format model files for [MosaicML's MPT-30B-Chat](https://huggingface.co/mosaicml/mpt-30b-chat).

+ Please note that these GGMLs are **not compatible with llama.cpp, or currently with text-generation-webui**. Please see below for a list of tools known to work with these model files.
+
+ [KoboldCpp](https://github.com/LostRuins/koboldcpp) just added GPU accelerated (OpenCL) support for MPT models, so that is the client I recommend using for these models.

  ## Repositories available

  * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/mpt-30B-chat-GGML)
  * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mosaicml/mpt-30b-chat)

+ ## Prompt template
+
+ Just type the prompt!
+ ```
+ prompt
+ ```

+ ## A note regarding context length: 8K

+ The base model has an 8K context length. It is not yet confirmed whether the 8K context of this model works with the quantised files.

+ If it does, [KoboldCpp](https://github.com/LostRuins/koboldcpp) supports 8K context if you manually set it to 8K by adjusting the text box above the slider:
+ ![KoboldCpp context size setting](https://i.imgur.com/tEbpeJq.png)

+ It is currently unknown whether this works with other clients.

+ If you have feedback on this, please let me know.
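
+ If the larger context does carry over, the client also needs to be told to request it. As an untested sketch, assuming the `context_length` setting of the ctransformers library (listed under Compatibility below) is honoured by its MPT backend:

+ ```
+ # Untested sketch: request an 8K context window via ctransformers.
+ # Assumption: ctransformers' context_length setting applies to its MPT backend.
+ from ctransformers import AutoModelForCausalLM
+
+ llm = AutoModelForCausalLM.from_pretrained(
+     "TheBloke/mpt-30B-chat-GGML",
+     model_file="mpt-30b-chat.ggmlv0.q4_0.bin",  # any quant from the table below
+     model_type="mpt",
+     context_length=8192,  # 8K, matching the base model
+ )
+ ```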

+ <!-- compatibility_ggml start -->
+ ## Compatibility
+
+ These files are **not** compatible with text-generation-webui, llama.cpp, or llama-cpp-python.
+
+ Currently they can be used with:
+ * KoboldCpp, a powerful inference engine based on llama.cpp, with good UI and GPU accelerated support for MPT models: [KoboldCpp](https://github.com/LostRuins/koboldcpp)
+ * The ctransformers Python library, which includes LangChain support (see the example sketch below): [ctransformers](https://github.com/marella/ctransformers)
+ * The LoLLMS Web UI, which uses ctransformers: [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
+ * [rustformers' llm](https://github.com/rustformers/llm)
+ * The example `mpt` binary provided with [ggml](https://github.com/ggerganov/ggml)

+ As other options become available I will endeavour to update them here (do let me know in the Community tab if I've missed something!)
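
+ For example, a minimal ctransformers sketch (the repo and file names come from the Provided Files table below; the generation settings are illustrative, not tested recommendations):

+ ```
+ # pip install ctransformers
+ from ctransformers import AutoModelForCausalLM
+
+ # Download the chosen GGML file from this repo and load it with the MPT backend.
+ llm = AutoModelForCausalLM.from_pretrained(
+     "TheBloke/mpt-30B-chat-GGML",
+     model_file="mpt-30b-chat.ggmlv0.q4_0.bin",
+     model_type="mpt",
+ )
+
+ # Per the Prompt template section above: just type the prompt.
+ print(llm("Write a story about llamas", max_new_tokens=200, temperature=0.7))
+ ```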

+ ## Tutorial for using LoLLMS Web UI
+
+ * [Text tutorial, written by **Lucas3DCG**](https://huggingface.co/TheBloke/MPT-7B-Storywriter-GGML/discussions/2#6475d914e9b57ce0caa68888)
+ * [Video tutorial, by LoLLMS Web UI's author **ParisNeo**](https://www.youtube.com/watch?v=ds_U0TDzbzI)

  <!-- compatibility_ggml end -->

  ## Provided files
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
+ | mpt-30b-chat.ggmlv0.q4_0.bin | q4_0 | 4 | 16.85 GB | 19.35 GB | 4-bit. |
+ | mpt-30b-chat.ggmlv0.q4_1.bin | q4_1 | 4 | 18.73 GB | 21.23 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than q5 models. |
+ | mpt-30b-chat.ggmlv0.q5_0.bin | q5_0 | 5 | 20.60 GB | 23.10 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | mpt-30b-chat.ggmlv0.q5_1.bin | q5_1 | 5 | 22.47 GB | 24.97 GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
+ | mpt-30b-chat.ggmlv0.q8_0.bin | q8_0 | 8 | 31.83 GB | 34.33 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

  <!-- footer start -->
  ## Discord