OwenArli committed on
Commit fe96834
1 Parent(s): 427a2a8

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -32,7 +32,7 @@ Let us know what you think of the model! The 8B and 12B versions of RPMax had gr
 
 The model is available in quantized formats:
 
-We recommend using full weights or GPTQ as GGUF seems to generate gibberish at low quants.
+We recommend using full weights or GPTQ. GGUF provided by https://huggingface.co/mradermacher/Llama-3.1-70B-ArliAI-RPMax-v1.1-GGUF
 
 * **FP16**: https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1
 * **GPTQ_Q4**: https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1-GPTQ_Q4