Update README.md
README.md CHANGED
@@ -21,10 +21,6 @@ These files are GPTQ 4bit model files for [WizardLM 13B 1.0](https://huggingface
 
 It is the result of merging the LoRA then quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
 
-## Need support? Want to discuss? I now have a Discord!
-
-Join me at: https://discord.gg/UBgz4VXf
-
 ## Other repositories available
 
 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-13B-1.0-GPTQ)