This is a competitive coding model that should outperform qwen-coder-32b-instruct, which we're running on github.gg.
This experimental 36B merge was made to fix the repetition issues of the huihui-ai abliterated coder.
There is a draft model to go with this one for speculative decoding and chain-of-thought reasoning: https://huggingface.co/nisten/qwen2.5-coder-7b-abliterated-128k-AWQ
Using the above 4-bit 7B in conjunction with the 36B is meant to set up a chain-of-thought reasoner/evaluator similar to what O1-O3 is probably doing. This way the 7B 4-bit only uses up an extra 4-6 GB on the GPU, but greatly speeds up both speculative decoding AND chain-of-thought evals.
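Below is a minimal sketch of how the pair could be served together with vLLM's speculative decoding. The two model IDs come from this card; the engine arguments (`speculative_model`, `num_speculative_tokens`) are an assumption about the vLLM version you're running (newer builds moved these into a `speculative_config` dict), and the GPU settings are placeholders to adjust for your hardware.

```python
# Hedged sketch: serve the 36B target with the 4-bit 7B draft model proposing
# tokens for speculative decoding. Argument names assume a vLLM build that
# still accepts `speculative_model` directly on the LLM constructor.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nisten/tqwendo-36b",                                        # target
    speculative_model="nisten/qwen2.5-coder-7b-abliterated-128k-AWQ",  # draft
    num_speculative_tokens=5,    # draft tokens proposed per step (tunable)
    tensor_parallel_size=2,      # placeholder: set to your GPU count
    max_model_len=32768,         # placeholder: trim to fit your VRAM
)

params = SamplingParams(temperature=0.2, max_tokens=512)
outputs = llm.generate(
    ["Write a Python function that reverses a singly linked list."], params
)
print(outputs[0].outputs[0].text)
```

Speculative decoding only pays off when the draft's token distribution tracks the target's closely enough that most proposed tokens are accepted; keeping both models in the same abliterated Qwen2.5-Coder family is what makes that acceptance rate, and therefore the speedup, plausible here.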
For example, the model was able to write an almost-working (heh) chat interface for itself in one shot. And this was the WHOLE interface, including Python, HTML, styling, and API calls.
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the passthrough merge method; an illustrative config sketch follows the model list below.
### Models Merged
The following models were included in the merge:
- Qwen/Qwen2.5-Coder-32B-Instruct
- Qwen/Qwen2.5-Coder-32B
- huihui-ai/Qwen2.5-32B-Instruct-abliterated
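The actual mergekit config for tqwendo-36b isn't published on this card, so the YAML below is only a hypothetical passthrough (layer-stacking) layout: overlapping layer ranges from two of the 32B parents are concatenated to reach roughly 36B of parameters. The `layer_range` values and the choice of which two parents to slice are illustrative assumptions, not the real recipe.

```yaml
# Hypothetical passthrough config -- the real slice boundaries for
# tqwendo-36b are not published; layer ranges here are illustrative only.
slices:
  - sources:
      - model: Qwen/Qwen2.5-Coder-32B-Instruct
        layer_range: [0, 40]
  - sources:
      - model: huihui-ai/Qwen2.5-32B-Instruct-abliterated
        layer_range: [33, 64]
merge_method: passthrough
dtype: bfloat16
```

Stacking 71 layers instead of the original 64 is what would push a 32B-class model to roughly 36B while reusing only pre-trained weights.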