Bias in large language models (LLMs) is a growing concern, particularly in sensitive customer-facing industries where fairness and compliance are critical. With the recent buzz around DeepSeek, we took the opportunity to showcase Hirundo's bias unlearning capabilities on DeepSeek-R1-Distill-Llama-8B. Our results demonstrate that, even with new and emerging models, we can significantly reduce bias (by up to 76% relative to the original model) without compromising model utility on other tasks such as logical QA and reasoning.
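
To make the debiased checkpoint easy to try, below is a minimal loading sketch using the Hugging Face transformers library. The model ID is the one for this repository; the prompt, generation settings, and use of the tokenizer's chat template are illustrative assumptions rather than an official recipe.

```python
# Minimal sketch: load the debiased checkpoint with the standard transformers API.
# Assumes a GPU with enough memory for an 8B-parameter model in FP16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hirundo-io/DeepSeek-R1-Distill-Llama-8B-Debiased"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are published in FP16
    device_map="auto",
)

# Illustrative prompt; formatting relies on the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Briefly explain what bias unlearning means."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```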

Model size: 8.03B parameters · Tensor type: FP16 · Format: Safetensors