A fine-tune of Mistral-Small-3_2-24B-Instruct-2506 using a new experimental technique for automatic unslopping.

The intention is to make the most common slop words & phrases much less frequent, with minimal impact on the model otherwise.

It won't remove slop entirely. The technique only targets over-represented words & phrases, not stylistic or thematic slop.
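The details of the technique aren't documented in this card, but as a rough illustration, the sketch below shows one plausible way over-represented phrases could be identified: comparing n-gram frequencies in a sample of model outputs against a human-written reference corpus. The function names, thresholds (`ratio`, `min_count`), and frequency-ratio heuristic are all illustrative assumptions, not the actual antislop method.

```python
# Illustrative sketch only -- NOT the actual antislop method. Flags n-grams
# that occur far more often in model outputs than in human reference prose.
from collections import Counter

def ngram_counts(texts, n):
    """Count whitespace-tokenized n-grams across a list of texts."""
    counts = Counter()
    for text in texts:
        tokens = text.lower().split()
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return counts

def over_represented(model_texts, reference_texts, n=3, ratio=5.0, min_count=3):
    """Return n-grams whose rate in model outputs exceeds their rate in the
    reference corpus by at least `ratio`. Thresholds are placeholders."""
    model_counts = ngram_counts(model_texts, n)
    ref_counts = ngram_counts(reference_texts, n)
    model_total = sum(model_counts.values()) or 1
    ref_total = sum(ref_counts.values()) or 1
    flagged = {}
    for gram, count in model_counts.items():
        if count < min_count:
            continue
        model_rate = count / model_total
        ref_rate = (ref_counts.get(gram, 0) + 1) / ref_total  # add-one smoothing
        if model_rate / ref_rate >= ratio:
            flagged[gram] = round(model_rate / ref_rate, 1)
    return flagged
```

A list produced this way could then feed a logit bias at generation time or a penalty during training; whether this model's technique works at inference or in training, and how it suppresses the flagged phrases, isn't specified here.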

This model should serve as a good base for further fine-tuning.
