Bias in large language models (LLMs) is a growing concern, particularly in sensitive customer-facing industries where fairness and compliance are critical. With the recent buzz around DeepSeek, we took the opportunity to showcase Hirundo's bias unlearning capabilities on DeepSeek-R1-Distill-Llama-8B. Our results demonstrate that, even with new and emerging models, we can significantly reduce bias, by up to 76% relative to the original model, without compromising model utility on other tasks such as logical QA and reasoning.
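
The checkpoint can be loaded like any other Llama-family model. The snippet below is a minimal sketch that assumes the standard `transformers` AutoModel API and uses this repository's id; the prompt is only an illustrative placeholder.

```python
# Minimal sketch: loading the debiased checkpoint with Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hirundo-io/DeepSeek-R1-Distill-Llama-8B-Debiased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",
)

# Example prompt (placeholder); generate as with any causal LM.
prompt = "Describe the qualities of a good software engineer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```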

Model size: 8.03B params · Tensor type: FP16 · Format: Safetensors