Commit e1b0e14 · Parent: 2de0900

lru_cache didn't work with Python 3.6.9, openai api needs py version

Files changed: README.md (+1 -1), whisper_online.py (+1 -1)
README.md CHANGED

@@ -43,7 +43,7 @@ Please, cite us. [ACL Anthology](https://aclanthology.org/2023.ijcnlp-demo.3/),
 Alternative, less restrictive, but slower backend is [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped): `pip install git+https://github.com/linto-ai/whisper-timestamped`
 
 Thirdly, it's also possible to run this software from the [OpenAI Whisper API](https://platform.openai.com/docs/api-reference/audio/createTranscription). This solution is fast and requires no GPU, just a small VM will suffice, but you will need to pay OpenAI for api access. Also note that, since each audio fragment is processed multiple times, the [price](https://openai.com/pricing) will be higher than obvious from the pricing page, so keep an eye on costs while using. Setting a higher chunk-size will reduce costs significantly.
 
-Install with: `pip install openai`
+Install with: `pip install openai` , [requires Python >=3.8](https://pypi.org/project/openai/).
 
 For running with the openai-api backend, make sure that your [OpenAI api key](https://platform.openai.com/api-keys) is set in the `OPENAI_API_KEY` environment variable. For example, before running, do: `export OPENAI_API_KEY=sk-xxx` with *sk-xxx* replaced with your api key.
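For context on the README line this commit touches: the openai-api backend reaches OpenAI's hosted Whisper model through the `openai` Python package, which is why the note about Python >=3.8 was added. Below is a minimal sketch of such a call, assuming the openai>=1.0 client; the file name and response format are illustrative and this is not the repo's exact code.

```python
# Minimal sketch (not the repo's exact code) of transcribing audio with the
# hosted Whisper model via the openai package (>=1.0, which needs Python >=3.8).
# The client reads OPENAI_API_KEY from the environment, as the README advises.
from openai import OpenAI

client = OpenAI()

with open("audio_chunk.wav", "rb") as f:  # hypothetical file name
    result = client.audio.transcriptions.create(
        model="whisper-1",               # OpenAI's hosted Whisper model
        file=f,
        response_format="verbose_json",  # includes segment timestamps
    )

print(result.text)
```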
whisper_online.py CHANGED

@@ -12,7 +12,7 @@ import math
 
 logger = logging.getLogger(__name__)
 
-@lru_cache
+@lru_cache(10**6)
 def load_audio(fname):
     a, _ = librosa.load(fname, sr=16000, dtype=np.float32)
     return a
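The whisper_online.py change is needed because decorating a function with bare `@lru_cache` is only supported from Python 3.8 onward; on 3.6/3.7 the decorator must be called, and its first positional argument is `maxsize`. A self-contained illustration (not from the repo) of the form the commit switches to:

```python
from functools import lru_cache

# Bare "@lru_cache" (no parentheses) is only accepted on Python >= 3.8.
# Calling the decorator, as this commit does, also works on 3.6/3.7.
# The positional argument is maxsize: here up to 10**6 cached results.
@lru_cache(10**6)
def square(x):
    return x * x

print(square(4))            # computed
print(square(4))            # returned from the cache
print(square.cache_info())  # hits=1, misses=1, maxsize=1000000, currsize=1
```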