Tags: Text Generation, GGUF, English, reasoning, thinking, uncensored, gated, mixture of experts, expert gate controls, expert named controls, Mixture of Experts, 8x3B, Llama 3.2 MOE, NEO Imatrix, 128k context, creative, creative writing, fiction writing, plot generation, sub-plot generation, story generation, scene continue, storytelling, fiction story, science fiction, romance, all genres, story, writing, vivid prosing, vivid writing, fiction, roleplaying, float32, swearing, rp, horror, mergekit, llama-3, llama-3.2, imatrix
Base version (#1), opened by lazyDataScientist
Would it be possible to have a finetune of your model without instruction finetuning?
All the models, including the "base" model in the MOE, are instruct.
All of them would have to be tuned from base/chat Llama 3.2, then re-"MOEed", so to speak.
Is this what you mean? Or do you mean without "gating" in the MOE?
For the latter, the "reg" version would work.
Please clarify if I am off base here.
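To give a rough idea of the re-"MOE" step: below is a minimal sketch of what a mergekit-moe config could look like if the experts were non-instruct (base) Llama 3.2 3B tunes. The model names and prompts are placeholders, not the actual experts used in this model, so treat it as an illustration only.

```yaml
# Hypothetical mergekit-moe config: combine base (non-instruct) Llama 3.2 3B
# finetunes into a single MOE. All model names below are placeholders.
base_model: meta-llama/Llama-3.2-3B   # non-instruct base as the core model
gate_mode: hidden                     # init router gates from hidden-state reps
                                      # of the positive prompts (vs. cheap_embed / random)
dtype: bfloat16
experts:
  - source_model: your-org/llama-3.2-3b-fiction-base   # placeholder expert 1
    positive_prompts:
      - "write a story"
      - "continue this scene"
  - source_model: your-org/llama-3.2-3b-horror-base    # placeholder expert 2
    positive_prompts:
      - "horror"
      - "dark fiction"
  # ...the remaining experts for an 8x3B build would be listed the same way
```

You would then run something like `mergekit-moe config.yaml ./output-model`. The `gate_mode` setting is the part that relates to the "gating" question: it controls how the router gates are initialized (e.g. `random` skips the prompt-based gate initialization entirely).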