Add support for Docker
- README.md +18 -0
- dockerfile +12 -0
README.md
CHANGED
@@ -22,4 +22,22 @@ pip install -r requirements.txt
 Finally, run the full version (no audio length restrictions) of the app:
 ```
 python app-full.py
 ```
+
+# Docker
+
+To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU. Then
+check out this repository and build an image:
+```
+sudo docker build -t whisper-webui:1 .
+```
+
+You can then start the WebUI with GPU support like so:
+```
+sudo docker run -d --gpus=all -p 7860:7860 whisper-webui:1
+```
+
+Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and are fine with running it on the CPU only:
+```
+sudo docker run -d -p 7860:7860 whisper-webui:1
+```
dockerfile
ADDED
@@ -0,0 +1,12 @@
+FROM huggingface/transformers-pytorch-gpu
+EXPOSE 7860
+
+ADD . /opt/whisper-webui/
+RUN python3 -m pip install -r /opt/whisper-webui/requirements.txt
+
+# Note: Models will be downloaded on demand to the directory /root/.cache/whisper.
+# You can also bind this directory in the container to somewhere on the host.
+
+WORKDIR /opt/whisper-webui/
+ENTRYPOINT ["python3"]
+CMD ["app-network.py"]
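
As the Dockerfile comment notes, models are downloaded on demand into /root/.cache/whisper inside the container. A minimal sketch of persisting that cache on the host via a bind mount (the host path below is only an example, not something defined in this repository):
```
# Persist downloaded Whisper models across container restarts by binding
# the cache directory to a host path (adjust the host path as needed).
sudo docker run -d --gpus=all -p 7860:7860 \
    -v /path/on/host/whisper-cache:/root/.cache/whisper \
    whisper-webui:1
```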