Update README.md
README.md
@@ -22,11 +22,11 @@ These files are GPTQ 4bit model files for [Tim Dettmers' Guanaco 33B](https://hu
 
 It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
-##
+## Repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-33B-GPTQ)
-* [4
-* [
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-33B-GGML)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/timdettmers/guanaco-33b-merged)
 
 ## Prompt template
 
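The "quantising to 4bit" the diff refers to means storing each weight as one of 16 integer levels plus a per-group scale and zero-point. As a minimal sketch only: this plain round-to-nearest scheme is *not* the GPTQ algorithm (GPTQ quantises column-by-column with second-order error compensation), and the group size of 4 here is chosen purely for illustration.

```python
# Illustrative 4-bit round-to-nearest quantisation, NOT the GPTQ algorithm.
# Each group of weights is mapped to integers 0..15 with a scale/zero-point.

def quantize_4bit(weights, group_size=4):
    """Quantise a flat list of floats to 4-bit codes (0..15) per group."""
    codes, scales, zeros = [], [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # 16 levels -> 15 steps; avoid 0
        scales.append(scale)
        zeros.append(lo)
        codes.extend(round((w - lo) / scale) for w in group)
    return codes, scales, zeros

def dequantize_4bit(codes, scales, zeros, group_size=4):
    """Reconstruct approximate float weights from 4-bit codes."""
    return [q * scales[i // group_size] + zeros[i // group_size]
            for i, q in enumerate(codes)]

w = [0.1, -0.3, 0.25, 0.05, 1.2, 0.9, -0.7, 0.0]
codes, scales, zeros = quantize_4bit(w)
w_hat = dequantize_4bit(codes, scales, zeros)
```

Each reconstructed weight lands within half a quantisation step (`scale / 2`) of the original, which is the error bound round-to-nearest guarantees per group.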