Decensored using a custom training script guided by model activations. The approach is similar to ablation/"abliteration" scripts, but not identical.

I've found the effect to be stronger than that of most abliteration scripts, so please use it responsibly.

The training script is released under the MIT license: https://github.com/nkpz/DeLMAT
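For context, here is a minimal sketch of the general abliteration idea this approach is being compared against (not the DeLMAT training script itself, which lives in the repo above): estimate a "refusal direction" from the difference in mean activations between refusal-triggering and neutral prompts, then project that direction out of a layer's output at inference time. The module, shapes, and cached activations below are hypothetical stand-ins.

```python
# Hypothetical sketch of abliteration-style activation editing, NOT the DeLMAT script.
# The layer, shapes, and "cached activations" are made-up stand-ins for illustration.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_dim = 64

# Stand-in for one transformer block's output projection.
block = nn.Linear(hidden_dim, hidden_dim)

# Pretend activations collected from refusal-triggering vs. neutral prompts.
refusal_acts = torch.randn(128, hidden_dim) + 0.5  # hypothetical cached activations
neutral_acts = torch.randn(128, hidden_dim)

# Difference-of-means "refusal direction", normalized to unit length.
direction = refusal_acts.mean(dim=0) - neutral_acts.mean(dim=0)
direction = direction / direction.norm()

def ablate_direction(module, inputs, output):
    # Remove the component of the layer output that lies along the refusal direction.
    proj = (output @ direction).unsqueeze(-1) * direction
    return output - proj

handle = block.register_forward_hook(ablate_direction)

x = torch.randn(4, hidden_dim)
y = block(x)
# The edited output should have a (near-)zero component along the direction.
print((y @ direction).abs().max())

handle.remove()
```

The difference in this model card's approach is that the activation signal guides training rather than a one-shot weight/output edit; see the DeLMAT repo for the actual method.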

Model size: 8.03B params (FP16, Safetensors)