# qwq-32b-abliterated-lora
This is a LoRA adapter extracted from a language model using mergekit.
## LoRA Details
This LoRA adapter was extracted from huihui-ai/QwQ-32B-abliterated and uses Qwen/QwQ-32B as a base.
### Parameters
The following command was used to extract this LoRA adapter:
```sh
/venv/main/bin/mergekit-extract-lora --model huihui-ai/QwQ-32B-abliterated --base-model Qwen/QwQ-32B --out-path qwq-32b-abliterated-lora --cuda --max-rank 32
```
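Once extracted, the adapter can be applied back onto the base model at load time. The sketch below is a minimal, hypothetical usage example using the `peft` library; it assumes `transformers` and `peft` are installed, that you have enough memory for a 32B model, and that `qwq-32b-abliterated-lora` is the adapter's local output directory (or its Hub repo id).

```python
def load_abliterated_qwq(adapter_path: str = "qwq-32b-abliterated-lora"):
    """Load Qwen/QwQ-32B and attach the extracted LoRA adapter.

    Hypothetical sketch: imports are deferred so merely defining this
    function does not pull in heavyweight dependencies.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Load the original base model the LoRA was extracted against.
    base = AutoModelForCausalLM.from_pretrained(
        "Qwen/QwQ-32B", torch_dtype="auto", device_map="auto"
    )
    # Wrap the base model with the rank-32 adapter extracted above.
    model = PeftModel.from_pretrained(base, adapter_path)
    tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")
    return model, tokenizer
```

Because mergekit extracted the adapter with `--max-rank 32`, applying it this way approximately reproduces huihui-ai/QwQ-32B-abliterated without storing a second full copy of the weights.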