# Whisper Streaming Web: Real-time Speech-to-Text with FastAPI WebSocket

This project is based on [Whisper Streaming](https://github.com/ufal/whisper_streaming) and lets you transcribe audio directly from your browser. Simply launch the local server and grant microphone access. Everything runs locally on your machine ✨

<p align="center">
  <img src="src/web/demo.png" alt="Demo Screenshot" width="600">
</p>

### Differences from [Whisper Streaming](https://github.com/ufal/whisper_streaming)

#### 🌐 **Web & API**  
- **Built-in Web UI** – No frontend setup required, just open your browser and start transcribing.  
- **FastAPI WebSocket Server** – Real-time speech-to-text processing with async FFmpeg streaming.  
- **JavaScript Client** – Ready-to-use MediaRecorder implementation for seamless client-side integration.

#### ⚙️ **Core Improvements**  
- **Buffering Preview** – Displays unvalidated transcription segments for immediate feedback.  
- **Multi-User Support** – Handles multiple users simultaneously without conflicts.  
- **MLX Whisper Backend** – Optimized for Apple Silicon for faster local processing.  
- **Enhanced Sentence Segmentation** – Improved buffer trimming for better accuracy across languages.  
- **Extended Logging** – More detailed logs to improve debugging and monitoring.  

#### 🎙️ **Advanced Features**  
- **Real-Time Diarization** – Identify different speakers in real time using [Diart](https://github.com/juanmc2005/diart).  


## Installation

1. **Clone the Repository**:

   ```bash
   git clone https://github.com/QuentinFuxa/whisper_streaming_web
   cd whisper_streaming_web
   ```


### How to Launch the Server

1. **Dependencies**:

- Install the required dependencies:

    ```bash
    # Whisper streaming required dependencies
    pip install librosa soundfile

    # Whisper streaming web required dependencies
    pip install fastapi ffmpeg-python
    ```
- Install at least one whisper backend among:

    ```
    whisper
    whisper-timestamped
    faster-whisper (faster backend on NVIDIA GPU)
    mlx-whisper (faster backend on Apple Silicon)
    ```
- Optional dependencies:

    ```
    # If you want to use VAC (Voice Activity Controller). Useful for preventing hallucinations
    torch

    # If you choose sentences as the buffer trimming strategy
    mosestokenizer
    wtpsplit
    tokenize_uk  # If you work with Ukrainian text

    # If you want to run the server using uvicorn (recommended)
    uvicorn

    # If you want to use diarization
    diart
    ```


2. **Run the FastAPI Server**:

    ```bash
    python whisper_fastapi_online_server.py --host 0.0.0.0 --port 8000
    ```

    - `--host` and `--port` let you specify the server's IP and port.
    - `--min-chunk-size` sets the minimum chunk size for audio processing. Make sure this value matches the chunk size selected in the frontend; if they differ, the system still works but may over-process audio unnecessarily.
    - `--transcription` defaults to `True`. Set it to `False` if you want to run only diarization.
    - `--diarization` defaults to `False`. Set it to `True` to run diarization in parallel with transcription.
    - For a full list of configurable options, run `python whisper_fastapi_online_server.py -h`.
    - For the remaining parameters, see the [whisper streaming](https://github.com/ufal/whisper_streaming) README.

3. **Open the Provided HTML**:

    - By default, the server root endpoint `/` serves a simple `live_transcription.html` page.  
    - Open your browser at `http://localhost:8000` (or replace `localhost` and `8000` with whatever you specified).  
    - The page uses vanilla JavaScript and the WebSocket API to capture your microphone and stream audio to the server in real time.

### How the Live Interface Works

- Once you **allow microphone access**, the page records small chunks of audio using the **MediaRecorder** API in **webm/opus** format.  
- These chunks are sent over a **WebSocket** to the FastAPI endpoint at `/asr`.  
- The Python server decodes `.webm` chunks on the fly using **FFmpeg** and streams them into the **whisper streaming** implementation for transcription.  
- **Partial transcription** appears as soon as enough audio is processed. The unvalidated text is shown in a lighter grey color (a preview) to indicate it is still buffered, partial output. Once Whisper finalizes that segment, it is displayed in normal text.  
- You can watch the transcription update in near real time, ideal for demos, prototyping, or quick debugging.
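The same flow can be driven from a script instead of the browser. Below is a hypothetical Python client for the `/asr` WebSocket endpoint that mirrors what `live_transcription.html` does: send small binary audio chunks, then read back a transcription update. The chunk pacing, the default chunk size, and the "one text frame per update" reply shape are assumptions for illustration, not the project's documented API.

```python
# Sketch of a Python client for the /asr WebSocket endpoint.
# Assumes the server accepts binary webm/opus chunks (as the browser sends them)
# and replies with text frames containing transcription updates.
import asyncio

SERVER_URL = "ws://localhost:8000/asr"  # use wss:// when the server is behind HTTPS
CHUNK_MS = 1000                          # mirror the chunk size chosen in the frontend

def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    """Split recorded bytes into fixed-size chunks, like MediaRecorder timeslices."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

async def stream_file(path: str, chunk_size: int = 16_000) -> None:
    import websockets  # third-party (pip install websockets); imported lazily
    with open(path, "rb") as f:
        audio = f.read()
    async with websockets.connect(SERVER_URL) as ws:
        for chunk in split_into_chunks(audio, chunk_size):
            await ws.send(chunk)                  # binary frame, like the browser
            await asyncio.sleep(CHUNK_MS / 1000)  # pace it like a live recording
        print(await ws.recv())                    # read one transcription update

# Example (requires the server to be running):
#   asyncio.run(stream_file("sample.webm"))
```

This is only a sketch for experimentation; for the real message format, inspect the WebSocket traffic produced by `live_transcription.html`.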

### Deploying to a Remote Server

If you want to **deploy** this setup:

1. **Host the FastAPI app** behind a production-grade HTTP(S) server (like **Uvicorn + Nginx** or Docker). If you use HTTPS, use `wss` instead of `ws` in the WebSocket URL.
2. The **HTML/JS page** can be served by the same FastAPI app or a separate static host.  
3. Users open the page in **Chrome/Firefox** (any modern browser that supports MediaRecorder + WebSocket).  

No additional front-end libraries or frameworks are required. The WebSocket logic in `live_transcription.html` is minimal enough to adapt for your own custom UI or embed in other pages.
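As a sketch of step 1, a minimal Nginx reverse-proxy block for the app might look like the following. The server name, certificate paths, and upstream port are placeholders for your own setup; the `Upgrade`/`Connection` headers are what allow the `/asr` WebSocket connection to pass through the proxy.

```nginx
server {
    listen 443 ssl;
    server_name example.com;                       # placeholder
    ssl_certificate     /etc/ssl/fullchain.pem;    # placeholder
    ssl_certificate_key /etc/ssl/privkey.pem;      # placeholder

    location / {
        proxy_pass http://127.0.0.1:8000;          # the Uvicorn/FastAPI app
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;    # required for WebSocket upgrade
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

With TLS terminated at Nginx, browsers connect via `wss://example.com/asr` while the app itself keeps listening on plain HTTP locally.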

## Acknowledgments

This project builds upon the foundational work of the Whisper Streaming project. We extend our gratitude to the original authors for their contributions.