Update README.md
README.md
CHANGED
@@ -113,6 +113,10 @@ We evaluated the speculative decoding setup for Whisper-large-v3-singlish on the
| SASRBench-v1 | 38.00% | 42.00% |
| AMI | 38.00% | 43.00% |

+### Conclusion
+
+While it does not outperform Large-Turbo in WER, the Draft-enhanced Large model demonstrates strong speculative acceptance rates (~38–43%), indicating meaningful potential for runtime gains through early acceptance of draft-model predictions. In latency-sensitive applications, it offers a compelling middle ground between the high accuracy of Large-Turbo and the slower inference of standard decoding.
+
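To make the speed/accuracy trade-off above concrete, the sketch below shows one way to run speculative (assisted) decoding with the Hugging Face `transformers` pipeline. The checkpoint IDs (`openai/whisper-large-v3` as the main model, `distil-whisper/distil-large-v3` as the draft) and the audio path `sample.wav` are placeholders rather than this repo's actual fine-tuned Singlish and draft checkpoints; substitute your own model IDs, and note that the draft model must share the main model's tokenizer.

```python
# Illustrative sketch of speculative (assisted) decoding; model IDs and the audio
# path are placeholders, not this repo's actual checkpoints.
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

main_id = "openai/whisper-large-v3"          # placeholder: main (verifier) model
draft_id = "distil-whisper/distil-large-v3"  # placeholder: draft (assistant) model

processor = AutoProcessor.from_pretrained(main_id)
model = AutoModelForSpeechSeq2Seq.from_pretrained(main_id, torch_dtype=torch_dtype)
assistant = AutoModelForSpeechSeq2Seq.from_pretrained(
    draft_id, torch_dtype=torch_dtype
).to(device)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    torch_dtype=torch_dtype,
    device=device,
)

# The draft model proposes a few tokens per step and the main model verifies them in
# one forward pass, so each accepted draft token skips a full large-model decode step.
result = pipe("sample.wav", generate_kwargs={"assistant_model": assistant})
print(result["text"])
```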
## Disclaimer

While this model has been fine-tuned to better recognize Singlish, users may experience inaccuracies, biases, or unexpected outputs, particularly in challenging audio conditions or with speakers using non-standard variations. Use of this model is at your own risk; the developers and distributors are not liable for any consequences arising from its use. Please validate results before deploying in any sensitive or production environment.