grimulkan committed on
Commit c9111dd
1 Parent(s): 1feecb7

Update README.md

Files changed (1)
README.md +1 -3
README.md CHANGED
@@ -8,8 +8,6 @@ There is no additional fine-tuning. The resulting model seems to not be broken..
 
 [ChuckMcSneed](https://huggingface.co/ChuckMcSneed) did a benchmark [here](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-32k-fp16/discussions/1), indicating 30% degradation with 8x the context length.
 
-A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-2k-6bpw_h8_exl2).
-
-More EXL2 quants [here](https://huggingface.co/aikitoria/Goliath-longLORA-120b-rope8-32k-exl2), thanks to aikitoria.
+A 6-bit EXL2 quantization is available [here](https://huggingface.co/grimulkan/Goliath-longLORA-120b-rope8-2k-6bpw_h8_exl2). More EXL2 quants [here](https://huggingface.co/aikitoria/Goliath-longLORA-120b-rope8-32k-exl2), thanks to aikitoria.
 
 See [this discussion](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16/discussions/2) for how the original 70B merges were created with longLORA.