Informing About a New Optimized Model #13
by OdyAsh
Hello all!
Alhamdulillah, I've managed to create a CTranslate2 version of the whisper-base-ar-quran model, called faster-whisper-base-ar-quran, and the steps taken for this conversion can be found in my Hugging Face model card here.
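For a quick idea of what the conversion involves (the full steps are in my model card), here is a minimal sketch using CTranslate2's Transformers converter; the original model id, copied files, and quantization below are assumptions on my part, so please treat them as placeholders:

```python
# Minimal sketch: converting the HF Whisper checkpoint to CTranslate2 format.
# Requires: pip install ctranslate2 transformers
import ctranslate2

converter = ctranslate2.converters.TransformersConverter(
    "tarteel-ai/whisper-base-ar-quran",  # assumed id of the original model repo
    # Files faster-whisper/WhisperX expect alongside the weights; drop any that
    # the source repo doesn't actually contain.
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert(
    "faster-whisper-base-ar-quran",  # output directory with the CTranslate2 weights
    quantization="float16",          # assumed quantization; adjust as needed
)
```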
The advantage of CTranslate2 models is that they work with the WhisperX repo (as shown here), which means fast inference times (as shown here).
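To make the "fast inference" point concrete, here is a rough usage sketch with WhisperX; the repo id, audio file name, and device/compute settings are assumptions for illustration only:

```python
# Rough usage sketch: transcribing with WhisperX on top of the CTranslate2 model.
# Requires: pip install whisperx (and a CUDA setup for float16; use compute_type="int8" on CPU)
import whisperx

device = "cuda"
audio = whisperx.load_audio("recitation.mp3")  # placeholder audio file

# A local path or a Hugging Face repo id with the CTranslate2 files should work here.
model = whisperx.load_model(
    "OdyAsh/faster-whisper-base-ar-quran",  # assumed repo id
    device,
    compute_type="float16",
    language="ar",
)

result = model.transcribe(audio, batch_size=16)
for segment in result["segments"]:
    print(f"[{segment['start']:.2f}s -> {segment['end']:.2f}s] {segment['text']}")
```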
I don't know how I could contribute this to your assets. So, if you find my contribution worthy, here are some options:
- Create a new model repo on your account called faster-whisper-base-ar-quran that consists of the files I have here.
- Dedicate a section of this repo's model card to any current and future models related to this whisper-base-ar-quran model, then add a link to my repo in that section.
- PR my changes into this repo (I don't suggest this, as past users may require the old implementation, so we want to ensure backward compatibility).
- Any other option that you think makes more sense :]
Again, I don't know what the best approach is; that's why I opened this as a discussion, not a PR :]
So by all means, tell me your thoughts!