This is a quantized version of distil-whisper-medium.en, converted with CTranslate2 to use 8-bit integer (int8) weights for faster inference with minimal loss of accuracy. It is intended for speech-to-text tasks where inference speed is critical.
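A minimal usage sketch, assuming the model is loaded through the `faster-whisper` package (which runs CTranslate2 Whisper models); the audio file name `audio.wav` is a placeholder:

```python
from faster_whisper import WhisperModel

# Load the CTranslate2 model; compute_type="int8" matches the
# 8-bit quantization described above. device="cpu" is illustrative.
model = WhisperModel(
    "Rejekts/fastest-distil-whisper-medium.en",
    device="cpu",
    compute_type="int8",
)

# Transcribe a local audio file (placeholder path) and print timed segments.
segments, info = model.transcribe("audio.wav")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```

Note that `faster-whisper` will download the model weights from the Hub on first use, so the initial call requires network access.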

Model tree for Rejekts/fastest-distil-whisper-medium.en