GGUF
QwQ-32B
reasoning
thinking
r1
cot
deepseek
Qwen2.5
Hermes
DeepHermes
DeepSeek
DeepSeek-R1-Distill
128k context
Merge
Uncensored
creative
general usage
problem solving
brainstorming
solve riddles
fiction writing
plot generation
sub-plot generation
story generation
scene continue
storytelling
fiction story
story
writing
fiction
roleplaying
swearing
horror
Qwen 2.5
mergekit
conversational
Update README.md
README.md CHANGED
@@ -74,7 +74,7 @@ In a 256 point precision (per layer) DARE TIES merge.
 
 128k context, ChatML or Jinja Template required.
 
-Special thanks to team "mradermacher" ( https://huggingface.co/mradermacher ) for quanting the model
+Special thanks to team "mradermacher" ( https://huggingface.co/mradermacher ) for quanting the model.
 
 <B>Model Requirements:</B>
 