
A fine-tune of google/gemma-3-4b-it using a new experimental technique for automatic unslopping.

The intention is to make the most common slop words & phrases much less frequent, with minimal impact on the model otherwise.

It won't remove slop entirely: the technique only targets over-represented words & phrases, not stylistic or thematic slop.

This model should serve as a good base for further fine-tuning.
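The card doesn't describe how "over-represented" words & phrases are identified, but the general idea can be sketched as comparing word frequencies in model outputs against a reference corpus and flagging words that appear disproportionately often. The function name, the ratio threshold, and the smoothing scheme below are all illustrative assumptions, not the actual method used for this model:

```python
from collections import Counter

def overrepresented_words(model_text, reference_text, min_ratio=3.0):
    """Flag words whose relative frequency in model output exceeds their
    relative frequency in a reference corpus by at least `min_ratio`.

    Illustrative sketch only: the actual unslopping technique is not
    documented in this card.
    """
    model_counts = Counter(model_text.lower().split())
    ref_counts = Counter(reference_text.lower().split())
    model_total = sum(model_counts.values())
    ref_total = sum(ref_counts.values())

    flagged = {}
    for word, count in model_counts.items():
        model_freq = count / model_total
        # Add-one smoothing so words absent from the reference corpus
        # don't cause division by zero.
        ref_freq = (ref_counts[word] + 1) / (ref_total + len(ref_counts))
        ratio = model_freq / ref_freq
        if ratio >= min_ratio:
            flagged[word] = ratio
    return flagged
```

A fine-tuning pipeline could then down-weight or penalize the flagged words during training; again, whether this model does so is an assumption.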

Model tree for sam-paech/gemma-3-4b-it-antislop-exp72
