This model was converted to GGUF format from [`DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored`](https://huggingface.co/DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored) for more details on the model.
---

This repo is for Goekdeniz-Guelmez's excellent "Josiefied-Qwen3-8B-abliterated-v1", modified from 32k (32768) context to 64k (65536) context using YaRN, as per the technical notes in the Qwen repo.
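For reference, the YaRN extension described above corresponds to a rope-scaling entry in the model's `config.json` along the lines of the sketch below (based on the Qwen3 model card's YaRN instructions; the `factor` of 2.0 is an assumption matching the 2× extension from 32768 to 65536, and the change is already baked into this conversion, so no action is needed):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 2.0,
    "original_max_position_embeddings": 32768
  }
}
```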
Original model repo for this fine-tune:

[https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1)

Max context on this version: 64k (65536).

Suggested minimum context limit for "thinking" / output: 8k to 16k.

Use the Jinja template or the ChatML template.
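For reference, a ChatML-formatted exchange looks like the sketch below (llama.cpp applies the chat template automatically from the GGUF metadata, so this is only needed when building prompts by hand):

```text
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```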
Please refer to the Qwen model card for details, benchmarks, usage, settings, turning reasoning on/off, system roles, etc.:

[https://huggingface.co/Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B)

---
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux).
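A minimal sketch of the usual GGUF-my-repo workflow follows. The `--hf-repo` and `--hf-file` values below are illustrative assumptions, not confirmed names — substitute the actual quant file published in this repository:

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run an interactive CLI session; -c 65536 requests the full 64k context.
# Repo and file names are illustrative — point them at the actual GGUF
# file you want from this repository.
llama-cli --hf-repo Triangle104/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored-GGUF \
  --hf-file qwen3-8b-64k-context-2x-josiefied-uncensored-q4_k_m.gguf \
  -c 65536 -p "The meaning to life and the universe is"
```

The same `--hf-repo`/`--hf-file` flags work with `llama-server` if you prefer an OpenAI-compatible HTTP endpoint.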