---
base_model: DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001
language:
- en
- fr
- zh
- de
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- rp
- qwen3
- horror
- finetune
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
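As a quick start, here is a minimal sketch of running one of the quants from the table below with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The file name, context size, and prompt are illustrative assumptions, not part of this repository:

```python
# Minimal sketch: run a downloaded quant with llama-cpp-python
# (pip install llama-cpp-python). Assumes the Q4_K_M file from the
# table below has already been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise it if your RAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Continue this scene: the lighthouse went dark."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```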
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF/resolve/main/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
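Instead of clicking a table link, a single quant can also be fetched programmatically. This is a sketch assuming the `huggingface_hub` package; any filename from the table above works:

```python
# Minimal sketch: download one quant file from this repository
# (pip install huggingface-hub). The filename must match a table row.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001-GGUF",
    filename="Qwen3-4B-Fiction-On-Fire-Series-7-Model-1001.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF
```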
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for answers to
questions you might have, or to request quants of another model.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->