Update README.md
README.md CHANGED

@@ -14,7 +14,7 @@ base_model:
 
 This repo contains the full precision source code, in "safe tensors" format to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly.
 
-Qwen3 - 8B set at 320k (
+Qwen3 - 8B set at 320k (327680) context by extended YARN.
 
 This is a collection of models of Qwen 3 8Bs with max context set at 64k, 96k, 128k, 192k, 256k, and 320k.
 
@@ -65,7 +65,7 @@ You can use GGUF-MY-REPO and build standard quants without imatrix using this re
 
 <B>General Notes:</b>
 
-Max context on this version is : 320k (
+Max context on this version is : 320k (327680)
 
 Use Jinja Template or CHATML template.
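For reference, the 320k (327680) figure the commit adds is the YaRN-extended context length. A minimal sketch of how such an extension is typically expressed in a model's `config.json` via the Hugging Face `rope_scaling` entry, assuming Qwen3's documented native context of 32768 tokens (the field names follow the convention from the Qwen3 model cards, not this repo specifically):

```python
# Assumption: Qwen3's native context window is 32768 tokens, and YaRN
# extension is configured by patching config.json. The YaRN scaling
# factor is simply target_context / native_context.
native_ctx = 32768
target_ctx = 327680  # 320k, as set in this repo's README

config_patch = {
    "max_position_embeddings": target_ctx,
    "rope_scaling": {
        "rope_type": "yarn",
        "factor": target_ctx / native_ctx,
        "original_max_position_embeddings": native_ctx,
    },
}

print(config_patch["rope_scaling"]["factor"])  # → 10.0
```

The other context sizes in the collection (64k, 96k, 128k, 192k, 256k) would correspond to factors of 2.0 through 8.0 under the same arithmetic.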