morriszms committed verified commit b083638 (1 parent: 08c451e)

Update README.md

Files changed (1): README.md (+20 -12)
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

<div style="text-align: left; margin: 20px 0;">
    <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
        Run them on the TensorBlock client using your local machine ↗
    </a>
</div>
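As a quick compatibility check, here is a minimal sketch (not part of this repo) that loads one of these files through the llama-cpp-python bindings, which wrap llama.cpp. The model path and generation settings are illustrative, and the prompt string follows the template documented in the next section.

```python
# Sketch: loading a quantized GGUF file with llama-cpp-python.
# Assumes L-MChat-7b-Q4_K_M.gguf has already been downloaded
# into the working directory.
from llama_cpp import Llama

llm = Llama(model_path="L-MChat-7b-Q4_K_M.gguf", n_ctx=2048)

# Single-turn prompt in the GPT4 Correct format described below.
out = llm(
    "GPT4 Correct User: Hello!<|end_of_turn|>GPT4 Correct Assistant:",
    max_tokens=64,
    stop=["<|end_of_turn|>"],
)
print(out["choices"][0]["text"])
```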
## Prompt template

```
<s>GPT4 Correct System: {system_prompt}<|end_of_turn|>GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
```
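If you are assembling prompts by hand, a small helper like the following (a sketch, not an official utility) fills the two placeholders:

```python
# Sketch: filling the template above. `system_prompt` and `prompt`
# correspond directly to the placeholders in the template.
def build_prompt(system_prompt: str, prompt: str) -> str:
    """Assemble a single-turn L-MChat-7b prompt in the GPT4 Correct format."""
    return (
        f"<s>GPT4 Correct System: {system_prompt}<|end_of_turn|>"
        f"GPT4 Correct User: {prompt}<|end_of_turn|>"
        "GPT4 Correct Assistant:"
    )

print(build_prompt("You are a helpful assistant.", "Summarize GGUF in one sentence."))
```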
 
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [L-MChat-7b-Q2_K.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q2_K.gguf) | Q2_K | 2.533 GB | smallest, significant quality loss - not recommended for most purposes |
| [L-MChat-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [L-MChat-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [L-MChat-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [L-MChat-7b-Q4_0.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [L-MChat-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [L-MChat-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [L-MChat-7b-Q5_0.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [L-MChat-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [L-MChat-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [L-MChat-7b-Q6_K.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [L-MChat-7b-Q8_0.gguf](https://huggingface.co/tensorblock/L-MChat-7b-GGUF/blob/main/L-MChat-7b-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
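To fetch a single file from the table programmatically, one option (assuming the `huggingface_hub` package; the section below covers downloading in full) is:

```python
# Sketch: downloading one quant from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/L-MChat-7b-GGUF",
    filename="L-MChat-7b-Q4_K_M.gguf",  # the recommended balanced quant
)
print(path)  # local cache path of the downloaded file
```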

## Downloading instruction