Update README.md
README.md CHANGED
@@ -11,6 +11,6 @@ There is no additional fine-tuning. The resulting model seems to not be broken..
 
 You could also try merging this with other models of longLORA lineage (like [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).
 
-A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-6bpw-h8-exl2).
+A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-6bpw-h8-exl2), and a 4-bit EXL2 quantization [here](https://huggingface.co/grimulkan/lzlv-longLORA-70b-rope8-32k-4bpw-h6-exl2).
 
 See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how to create merges like these.
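As a rough illustration of the weight-space merging the README refers to: merges like these linearly interpolate the parameters of two architecture-identical checkpoints. This is only a minimal sketch with toy floats standing in for weight tensors (a real merge operates on full checkpoint state dicts, e.g. via torch, and the linked discussion describes the actual procedure); `linear_merge` and its parameter names are hypothetical.

```python
def linear_merge(state_a, state_b, alpha=0.5):
    """Return alpha * A + (1 - alpha) * B for every shared parameter name.

    Both inputs must come from models with identical architectures,
    i.e. the same set of parameter names.
    """
    assert state_a.keys() == state_b.keys(), "architectures must match"
    return {name: alpha * state_a[name] + (1.0 - alpha) * state_b[name]
            for name in state_a}

# Two toy "checkpoints" sharing the same parameter names.
a = {"layer.0.weight": 1.0, "layer.0.bias": 0.0}
b = {"layer.0.weight": 3.0, "layer.0.bias": 2.0}
merged = linear_merge(a, b, alpha=0.5)
# -> {"layer.0.weight": 2.0, "layer.0.bias": 1.0}
```

With `alpha=0.5` this is a plain average; skewing `alpha` toward one model weights the merge toward that model's behavior.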