## Description

This is a GPTQ 4-bit quantized version of [Llama-3-Lumimaid-8B-v0.1](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1).

This was quantized at a sequence length of 8192 (`seqlen=8192`) using the [AutoGPTQ wikitext2 example](https://github.com/AutoGPTQ/AutoGPTQ/blob/main/examples/quantization/basic_usage_wikitext2.py).

This is my first quant, so I could have made a mistake somewhere. However, I did some testing and it appears to be working well.

by mikudev
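For readers new to quantization: a rough sketch of what "4-bit" storage means for the weights. This is **not** the script used for this model (that was the AutoGPTQ example linked above), and it shows only naive round-to-nearest with a per-group scale; GPTQ proper additionally uses calibration data and second-order information to choose roundings that minimize layer output error. Function names here are made up for illustration.

```python
# Toy per-group 4-bit quantization: each group of weights is mapped to
# integer codes in 0..15 plus a float scale and zero-point.
def quantize_4bit(weights, group_size=4):
    """Quantize a flat list of floats to 4-bit codes (0..15) per group."""
    codes, scales, zeros = [], [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # 4 bits -> 16 levels; avoid div-by-zero
        codes.append([round((w - lo) / scale) for w in group])
        scales.append(scale)
        zeros.append(lo)
    return codes, scales, zeros

def dequantize_4bit(codes, scales, zeros):
    """Recover approximate float weights from the 4-bit codes."""
    out = []
    for group, scale, zero in zip(codes, scales, zeros):
        out.extend(c * scale + zero for c in group)
    return out

weights = [0.12, -0.50, 0.33, 0.07, 1.20, -0.90, 0.00, 0.45]
q, s, z = quantize_4bit(weights)
approx = dequantize_4bit(q, s, z)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
```

Each code fits in 4 bits, so the weights shrink roughly 4x versus fp16, at the cost of a small per-weight reconstruction error (bounded by half the group's scale).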