This model was converted to GGUF format from [`DavidAU/Qwen3-8B-96k-Context-3X-Medium-Plus`](https://huggingface.co/DavidAU/Qwen3-8B-96k-Context-3X-Medium-Plus) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-8B-96k-Context-3X-Medium-Plus) for more details on the model.

---

Qwen3 8B set at 96k (98304) context via extended YaRN.

This is one of a collection of Qwen3 8B models with max context set at 64k, 96k, 128k, 192k, 256k, and 320k.

Changing the maximum context (from the default 32k) affects:

- reasoning
- prose, sentence structure, and output
- general performance (up or down, depending on use case)
- length and/or detail of outputs, especially long form
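
At inference time the extended window can be used as-is or reproduced with llama.cpp's YaRN options. The sketch below is illustrative rather than taken from this repo: the GGUF filenames are placeholders, and the 3.0 scale factor is inferred from 98304 = 3 × 32768.

```bash
# A GGUF converted from the 96k checkpoint normally carries the extended
# context in its metadata, so requesting the full window is usually enough
# (filename is a placeholder):
llama-cli -m qwen3-8b-96k-q4_k_m.gguf -c 98304 -p "Summarize the following document:"

# To apply the same YaRN extension to a stock 32k Qwen3 8B GGUF instead,
# llama.cpp exposes the rope-scaling flags explicitly (96k = 3 x 32k):
llama-cli -m qwen3-8b-q4_k_m.gguf \
  --rope-scaling yarn --rope-scale 3.0 --yarn-orig-ctx 32768 \
  -c 98304 -p "Summarize the following document:"
```
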
---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
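
Below is a minimal sketch of the usual llama.cpp workflow for a GGUF-my-repo conversion; the repo and file names are placeholders for the actual GGUF quant you download, while `brew install llama.cpp` and the `--hf-repo`/`--hf-file`/`-c` flags are stock llama.cpp usage.

```bash
# Install llama.cpp (provides llama-cli and llama-server)
brew install llama.cpp

# Run the model straight from the Hub (replace the placeholder repo/file names
# with the GGUF repo and quant file you actually want):
llama-cli --hf-repo <user>/Qwen3-8B-96k-Context-3X-Medium-Plus-GGUF \
  --hf-file qwen3-8b-96k-q4_k_m.gguf \
  -c 98304 -p "Write a long-form story about"

# Or expose it over an OpenAI-compatible HTTP API:
llama-server --hf-repo <user>/Qwen3-8B-96k-Context-3X-Medium-Plus-GGUF \
  --hf-file qwen3-8b-96k-q4_k_m.gguf \
  -c 98304
```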