Informing About a New Optimized Model

#13
by OdyAsh

Hello all!

Alhamdulillah, I've managed to create a CTranslate2 version of the whisper-base-ar-quran model, called faster-whisper-base-ar-quran; the steps I took for this conversion can be found in my 🤗 model card here.
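For reference, the conversion boils down to something like the sketch below, using CTranslate2's Python converter API (assuming the source model's full id is tarteel-ai/whisper-base-ar-quran and that float16 quantization is acceptable; the exact steps are in the model card linked above).

```python
# Rough sketch of the conversion with CTranslate2's Python converter API,
# assuming the source model's full id is tarteel-ai/whisper-base-ar-quran
# and that float16 quantization is acceptable.
from ctranslate2.converters import TransformersConverter

converter = TransformersConverter("tarteel-ai/whisper-base-ar-quran")
converter.convert(
    output_dir="faster-whisper-base-ar-quran",
    quantization="float16",  # smaller and faster; omit to keep full precision
)
```

The output directory then holds the converted weights and config files that faster-whisper (and therefore WhisperX) can load directly.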

The advantage of CTranslate2 models is that they work with the WhisperX repo, as shown here, which means fast inference times, as shown here.
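For illustration, loading the converted model for fast transcription could look roughly like this (a sketch, assuming the CTranslate2 files sit in a local faster-whisper-base-ar-quran directory and a CUDA GPU is available; WhisperX hands the model name straight to faster-whisper, so a local path or the Hub repo id should both work):

```python
# Minimal sketch of batched inference with WhisperX on the converted model;
# "recitation.mp3" is a placeholder audio file.
import whisperx

device = "cuda"  # use "cpu" with compute_type="int8" if no GPU is available
model = whisperx.load_model(
    "faster-whisper-base-ar-quran",  # local CTranslate2 directory (or the Hub repo id)
    device,
    compute_type="float16",
)

audio = whisperx.load_audio("recitation.mp3")
result = model.transcribe(audio, batch_size=16)  # batching is where most of the speedup comes from
for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])
```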

I'm not sure how best to contribute this to your assets. So, if you find my contribution worthwhile, here are some options:

  1. Create a new repo (i.e., model) on your account called faster-whisper-base-ar-quran that consists of the files I have here.
  2. Dedicate a section in this repo's model card that mentions current and future models related to whisper-base-ar-quran, and add a link to my repo in that section.
  3. PR my changes into this repo (I don't suggest this, since past users may rely on the old implementation and we'd want to keep backward compatibility).
  4. Any other option that you think makes more sense :]

Again, I don't know what the best approach is, which is why I opened this as a discussion rather than a PR :].

So by all means, tell me your thoughts! 🙌
