This model is an experiment. I computed the weight delta between mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated and meta-llama/Llama-3.1-8B-Instruct and applied it to the common layers of ICTNLP/Llama-3.1-8B-Omni.
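The delta-transplant idea can be sketched roughly as follows. This is a minimal illustration on toy state dicts, not the actual script used; `apply_delta` is a hypothetical helper name, and real use would load the three checkpoints (e.g. via `safetensors`) before merging.

```python
import torch

def apply_delta(base: dict, tuned: dict, target: dict) -> dict:
    """Add the (tuned - base) weight delta onto target's matching layers."""
    merged = {}
    for name, weight in target.items():
        if name in base and name in tuned and base[name].shape == weight.shape:
            # Common layer: transplant the fine-tuning delta.
            merged[name] = weight + (tuned[name] - base[name])
        else:
            # Layer unique to the target (e.g. Omni's speech modules): keep as-is.
            merged[name] = weight.clone()
    return merged

# Toy example: the delta (1 - 0 = 1) is added to the target's common layer,
# while the target-only layer is left untouched.
base = {"layers.0.weight": torch.zeros(2, 2)}
tuned = {"layers.0.weight": torch.ones(2, 2)}
target = {"layers.0.weight": torch.full((2, 2), 5.0), "speech.weight": torch.ones(3)}
merged = apply_delta(base, tuned, target)
```

Layers present only in the Omni model (its speech components) have no counterpart in the instruct checkpoints, so they pass through unchanged.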

The intention was to see whether the Omni model could inherit the abliterated behavior. The result (this model) is coherent, but not fully uncensored; the most likely reason lies in how the Omni model was trained.

Model size: 9.11B parameters (Safetensors, BF16)