# qwq-32b-abliterated-lora

This is a LoRA adapter extracted from a language model using mergekit.

## LoRA Details

This LoRA adapter was extracted from huihui-ai/QwQ-32B-abliterated, using Qwen/QwQ-32B as the base model.

### Parameters

The following command was used to extract this LoRA adapter:

```sh
mergekit-extract-lora --model huihui-ai/QwQ-32B-abliterated --base-model Qwen/QwQ-32B --out-path qwq-32b-abliterated-lora --cuda --max-rank 32
```
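The extracted adapter can be applied back onto the base model at load time. A minimal sketch using the `peft` and `transformers` libraries (the repo id `chenrm/qwq-32b-abliterated-lora` is taken from this card; dtype and device placement are illustrative assumptions):

```python
# Sketch: load the base model, then attach this LoRA adapter with PEFT.
# Requires `transformers` and `peft`; downloads ~65 GB of base weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model the adapter was extracted against.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/QwQ-32B",
    torch_dtype="auto",   # assumption: let transformers pick the checkpoint dtype
    device_map="auto",    # assumption: shard across available GPUs
)

# Apply the rank-32 adapter on top of the base weights.
model = PeftModel.from_pretrained(base, "chenrm/qwq-32b-abliterated-lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")
```

Calling `model.merge_and_unload()` afterwards would bake the adapter into the base weights, reproducing (up to the rank-32 approximation) the original huihui-ai/QwQ-32B-abliterated model.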
Adapter size: 269M params (`qwen2` architecture); GGUF available.

## Model tree for chenrm/qwq-32b-abliterated-lora

- Base model: Qwen/Qwen2.5-32B
- Finetuned: Qwen/QwQ-32B
- Adapter: this model