πŸ’Ž Gemma 3 12B IT QAT Abliterated

image/png

Gemma 3 QAT Abliterated 1B β€’ 4B β€’ 12B β€’ 27B

This is an uncensored version of google/gemma-3-12b-it-qat-q4_0-unquantized created with a new abliteration technique. See this article to know more about abliteration.

This is a new, improved version that targets refusals with enhanced accuracy.

I recommend using these generation parameters: temperature=1.0, top_k=64, top_p=0.95.

βœ‚οΈ Abliteration

image/png

The refusal direction is computed by comparing the residual streams between target (harmful) and baseline (harmless) samples. The hidden states of target modules (e.g., o_proj) are orthogonalized to subtract this refusal direction with a given weight factor. These weight factors follow a normal distribution with a certain spread and peak layer. Modules can be iteratively orthogonalized in batches, or the refusal direction can be accumulated to save memory.

Finally, I used a hybrid evaluation with a dedicated test set to calculate the acceptance rate. This uses both a dictionary approach and NousResearch/Minos-v1. The goal is to obtain an acceptance rate >90% and still produce coherent outputs.

Downloads last month
666
GGUF
Model size
11.8B params
Architecture
gemma3
Hardware compatibility
Log In to view the estimation

2-bit

3-bit

4-bit

5-bit

6-bit

8-bit

Inference Providers NEW
This model isn't deployed by any Inference Provider. πŸ™‹ Ask for provider support

Model tree for mlabonne/gemma-3-12b-it-qat-abliterated-GGUF

Quantized
(8)
this model