helizac committed (verified)
Commit 0e9b1d0 · 1 Parent(s): 63abce8

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -48,7 +48,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <!-- README_GGUF.md-about-gguf end -->
 
 <!-- prompt-template start -->
-## Prompt template: ChatML
+## Prompt template
 
 ```
 [S2S]prompt<EOS>
@@ -69,8 +69,6 @@ Those models are quantized by candle, cargo using Rust and Python.
 <!-- README_GGUF.md-provided-files start -->
 ## Provided files
 
-Sure, here's the updated table with comments and the swapped values for Quant Method and Bit:
-
 | Name | Bit | Quant Method | Size | Use case |
 | ---- | ---- | ---- | ---- | ---- |
 | [TURNA_Q2K.gguf](https://huggingface.co/helizac/TURNA_GGUF/blob/main/TURNA_Q2K.gguf) | 2 | Q2K | 0.36 GB | Smallest size, lowest precision |
@@ -130,7 +128,7 @@ For more documentation on downloading with `huggingface-cli`, please see: [HF ->
 <!-- README_GGUF.md-how-to-download end -->
 
 <!-- README_GGUF.md-how-to-run start -->
-## Example `colab` usage
+# Example `colab` usage
 
 ```shell
 %%shell
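
The prompt template kept by this change (`[S2S]prompt<EOS>`) can be produced with plain string formatting. Below is a minimal sketch in Python, assuming only the literal `[S2S]` prefix and `<EOS>` suffix shown in the prompt-template section; the helper name `build_prompt` is hypothetical and not part of the repository.

```python
def build_prompt(user_text: str) -> str:
    """Wrap raw text in the TURNA GGUF prompt template from the README.

    Hypothetical helper: it only assumes the literal [S2S] prefix and
    <EOS> suffix shown in the prompt-template section.
    """
    return f"[S2S]{user_text}<EOS>"


if __name__ == "__main__":
    # Example: a Turkish question formatted for the model.
    print(build_prompt("Türkiye'nin başkenti neresidir?"))
    # Output: [S2S]Türkiye'nin başkenti neresidir?<EOS>
```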