游雁 committed · cad2f75 · Parent(s): 8477679 · "add"

README.md CHANGED
tasks:
pipeline_tag: voice-activity-detection
---

# FSMN-Monophone VAD Model Introduction

## Highlight
- A general-purpose 16 kHz Chinese VAD model that detects the start and end times of valid speech in long audio.
- Used in the [Paraformer-large long-audio model](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) scenario.
- Built on the [FunASR framework](https://github.com/alibaba-damo-academy/FunASR); ASR, VAD, and [Chinese punctuation](https://www.modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/summary) models can be freely combined.
- Detects the start and end times of valid speech segments directly from audio data.

<strong>[FunASR](https://github.com/alibaba-damo-academy/FunASR)</strong> hopes to build a bridge between academic research and industrial applications of speech recognition. By releasing the training and fine-tuning recipes of industrial-grade speech recognition models, it lets researchers and developers conduct research and production of speech recognition models more conveniently, and promotes the development of the speech recognition ecosystem. Make speech recognition fun!

[**GitHub repository**](https://github.com/alibaba-damo-academy/FunASR)
| [**What's New**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
| [**Installation**](https://github.com/alibaba-damo-academy/FunASR#installation)
| [**Service Deployment**](https://www.funasr.com)
| [**Model Zoo**](https://github.com/alibaba-damo-academy/FunASR/tree/main/model_zoo)
| [**Contact Us**](https://github.com/alibaba-damo-academy/FunASR#contact)

FSMN-Monophone VAD is an efficient voice activity detection model proposed by the DAMO Academy speech team. It detects the start and end times of valid speech in the input audio, and only the detected speech segments are fed to the recognition engine, reducing recognition errors caused by non-speech audio.

(figure: FSMN-Monophone VAD model structure)

The FSMN-Monophone VAD model structure is shown in the figure above. At the model level, the FSMN architecture takes context into account during modeling, is fast to train and run, and has controllable latency; the network structure and the number of right-context frames were adapted to the VAD model-size and low-latency requirements. At the modeling-unit level, speech is information-rich and a single "speech" class has limited capacity to represent it, so the single speech class was upgraded to monophones. Finer-grained modeling units avoid parameter averaging, strengthen abstraction, and improve discrimination.

Supported input formats:

- A wav file URL, e.g. https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav
- Binary wav data of type bytes, e.g. bytes read directly from a file or recorded from a microphone.
- Already-decoded audio, e.g. audio, rate = soundfile.read("vad_example_zh.wav"), of type numpy.ndarray or torch.Tensor.
- A wav.scp file, which must follow this format:

```sh
cat wav.scp
vad_example1 data/test/audios/vad_example1.wav
vad_example2 data/test/audios/vad_example2.wav
...
```

Inference with the ModelScope pipeline:

```python
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

inference_pipeline = pipeline(
    task=Tasks.voice_activity_detection,
    model='iic/speech_fsmn_vad_zh-cn-16k-common-pytorch',
    model_revision="v2.0.4",
)

segments_result = inference_pipeline(input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav')
print(segments_result)
```

If the input is headerless .pcm audio, pass the sampling rate explicitly since it cannot be read from the file:

```python
segments_result = inference_pipeline(input='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.pcm', fs=16000)
```
|
77 |
|
78 |
-
|
|
|
79 |
|
80 |
-
|
81 |
-
inference_pipeline(input="wav.scp", output_dir='./output_dir')
|
82 |
-
```
|
83 |
-
识别结果输出路径结构如下:
```sh
tree output_dir/
output_dir/
└── 1best_recog
    └── text
```

- If the input audio is already decoded, the API can be called as in the following example:

```python
import soundfile

waveform, sample_rate = soundfile.read("vad_example_zh.wav")
segments_result = inference_pipeline(input=waveform)
print(segments_result)
```
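
The raw-bytes input from the list above can be exercised the same way; a minimal sketch, assuming the pipeline accepts wav bytes exactly as that list item states:

```python
# Minimal sketch of the bytes input path described above;
# assumes inference_pipeline accepts raw wav bytes, as the input list states.
with open("vad_example_zh.wav", "rb") as f:
    wav_bytes = f.read()

segments_result = inference_pipeline(input=wav_bytes)
print(segments_result)
```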

- Notes on tuning common VAD parameters (see the vad.yaml file; a hypothetical excerpt follows this list):
  - max_end_silence_time: how long a trailing silence must last before the endpoint is closed; range 500 ms to 6000 ms, default 800 ms (too low a value tends to truncate speech early).
  - speech_noise_thres: a frame is judged as speech when the speech score minus the noise score exceeds this value; range (-1, 1).
    - The closer the value is to -1, the more likely noise is misclassified as speech (higher FA).
    - The closer the value is to +1, the more likely speech is misclassified as noise (higher Pmiss).
    - In practice, this value is chosen to balance the two based on the current model's results on a long-audio test set.
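
As a hypothetical illustration of where these settings live, the relevant portion of vad.yaml might look as follows; the 800 ms default is stated above, while the speech_noise_thres value shown is an assumed placeholder:

```yaml
# Hypothetical vad.yaml excerpt, limited to the two parameters discussed above.
max_end_silence_time: 800   # ms; valid range 500-6000; documented default
speech_noise_thres: 0.6     # valid range (-1, 1); value here is an assumption
```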

### Command-line usage

Run in a command-line terminal:

```shell
funasr
```
|
125 |
|
126 |
-
|
127 |
|
128 |
-
###
|
129 |
-
#### 非实时语音识别
|
130 |
```python
|
131 |
from funasr import AutoModel
|
132 |
# paraformer-zh is a multi-functional asr model
|
@@ -137,14 +96,13 @@ model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
|
|
137 |
# spk_model="cam++", spk_model_revision="v2.0.2",
|
138 |
)
|
139 |
res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
|
140 |
-
|
141 |
-
|
142 |
print(res)
|
143 |
```
|
144 |
-
|
145 |
-
|
146 |
-
#### 实时语音识别
|
147 |
|
|
|
148 |
```python
|
149 |
from funasr import AutoModel
|
150 |
|
@@ -169,21 +127,18 @@ for i in range(total_chunk_num):
|
|
169 |
res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
|
170 |
print(res)
|
171 |
```

#### Voice Activity Detection (Non-streaming)

```python
from funasr import AutoModel

model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
res = model.generate(input=wav_file)
print(res)
```
|
185 |
-
|
186 |
-
#### 语音端点检测(实时)
|
187 |
```python
|
188 |
from funasr import AutoModel
|
189 |
|
@@ -205,57 +160,23 @@ for i in range(total_chunk_num):
|
|
205 |
if len(res[0]["value"]):
|
206 |
print(res)
|
207 |
```

#### Punctuation Restoration

```python
from funasr import AutoModel

model = AutoModel(model="ct-punc", model_revision="v2.0.4")

res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)
```

#### Timestamp Prediction

```python
from funasr import AutoModel

model = AutoModel(model="fa-zh", model_revision="v2.0.4")

wav_file = f"{model.model_path}/example/asr_example.wav"
text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```
|
230 |
|
231 |
-
|
232 |
-
|
233 |
-
|
234 |
-
## 微调
|
235 |
-
|
236 |
-
详细用法([示例](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining))
|
237 |
-
|
238 |
-
|
239 |
-
|
240 |
-
|
241 |
-
|
242 |
-
## 使用方式以及适用范围
|
243 |
-
|
244 |
-
运行范围
|
245 |
-
- 支持Linux-x86_64、Mac和Windows运行。
|
246 |
-
|
247 |
-
使用方式
|
248 |
-
- 直接推理:可以直接对长语音数据进行计算,有效语音片段的起止时间点信息(单位:ms)。
|
249 |
-
|
250 |
-
## 相关论文以及引用信息
|
251 |
-
|
252 |
-
```BibTeX
|
253 |
-
@inproceedings{zhang2018deep,
|
254 |
-
title={Deep-FSMN for large vocabulary continuous speech recognition},
|
255 |
-
author={Zhang, Shiliang and Lei, Ming and Yan, Zhijie and Dai, Lirong},
|
256 |
-
booktitle={2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
|
257 |
-
pages={5869--5873},
|
258 |
-
year={2018},
|
259 |
-
organization={IEEE}
|
260 |
-
}
|
261 |
-
```
# FunASR: A Fundamental End-to-End Speech Recognition Toolkit

[![PyPI](https://img.shields.io/pypi/v/funasr)](https://pypi.org/project/funasr/)

<strong>FunASR</strong> hopes to build a bridge between academic research and industrial applications of speech recognition. By supporting the training and fine-tuning of industrial-grade speech recognition models, it lets researchers and developers conduct research and production of speech recognition models more conveniently, and promotes the development of the speech recognition ecosystem. ASR for Fun!

[**Highlights**](#highlights)
| [**News**](https://github.com/alibaba-damo-academy/FunASR#whats-new)
| [**Installation**](#installation)
| [**Quick Start**](#quick-start)
| [**Runtime**](./runtime/readme.md)
| [**Model Zoo**](#model-zoo)
| [**Contact**](#contact)

<a name="highlights"></a>
## Highlights
- FunASR is a fundamental speech recognition toolkit that offers a variety of features, including speech recognition (ASR), Voice Activity Detection (VAD), Punctuation Restoration, Language Models, Speaker Verification, Speaker Diarization, and multi-talker ASR. FunASR provides convenient scripts and tutorials, supporting inference and fine-tuning of pre-trained models.
- We have released a vast collection of academic and industrial pretrained models on [ModelScope](https://www.modelscope.cn/models?page=1&tasks=auto-speech-recognition) and [Huggingface](https://huggingface.co/FunASR), which can be accessed through our [Model Zoo](https://github.com/alibaba-damo-academy/FunASR/blob/main/docs/model_zoo/modelscope_models.md). The representative [Paraformer-large](https://www.modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary), a non-autoregressive end-to-end speech recognition model, has the advantages of high accuracy, high efficiency, and convenient deployment, supporting the rapid construction of speech recognition services. For more details on service deployment, please refer to the [service deployment document](runtime/readme_cn.md).

<a name="installation"></a>
## Installation

```shell
pip3 install -U funasr
```
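
To confirm that the package is installed (plain pip tooling; nothing FunASR-specific is assumed here):

```shell
pip3 show funasr
```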

Or install from source code:

```sh
git clone https://github.com/alibaba/FunASR.git && cd FunASR
pip3 install -e ./
```

Install ModelScope for the pretrained models (optional):

```shell
pip3 install -U modelscope
```

## Model Zoo
FunASR has open-sourced a large number of pre-trained models on industrial data. You are free to use, copy, modify, and share FunASR models under the [Model License Agreement](./MODEL_LICENSE). Below are some representative models; for more models, please refer to the [Model Zoo]().

(Note: 🤗 links to the Huggingface model zoo, ⭐ links to the ModelScope model zoo)

| Model Name | Task Details | Training Data | Parameters |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:--------------------------------------------------:|:--------------------------------:|:----------:|
| paraformer-zh <br> ([⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-zh-cn-16k-common-vocab8404-pytorch/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 60000 hours, Mandarin | 220M |
| <nobr>paraformer-zh-streaming <br> ( [⭐](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-online/summary) [🤗]() )</nobr> | speech recognition, streaming | 60000 hours, Mandarin | 220M |
| paraformer-en <br> ( [⭐](https://www.modelscope.cn/models/damo/speech_paraformer-large-vad-punc_asr_nat-en-16k-common-vocab10020/summary) [🤗]() ) | speech recognition, with timestamps, non-streaming | 50000 hours, English | 220M |
| conformer-en <br> ( [⭐](https://modelscope.cn/models/damo/speech_conformer_asr-en-16k-vocab4199-pytorch/summary) [🤗]() ) | speech recognition, non-streaming | 50000 hours, English | 220M |
| ct-punc <br> ( [⭐](https://modelscope.cn/models/damo/punc_ct-transformer_cn-en-common-vocab471067-large/summary) [🤗]() ) | punctuation restoration | 100M, Mandarin and English | 1.1G |
| fsmn-vad <br> ( [⭐](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/summary) [🤗]() ) | voice activity detection | 5000 hours, Mandarin and English | 0.4M |
| fa-zh <br> ( [⭐](https://modelscope.cn/models/damo/speech_timestamp_prediction-v1-16k-offline/summary) [🤗]() ) | timestamp prediction | 5000 hours, Mandarin | 38M |
| cam++ <br> ( [⭐](https://modelscope.cn/models/iic/speech_campplus_sv_zh-cn_16k-common/summary) [🤗]() ) | speaker verification/diarization | 5000 hours | 7.2M |

[//]: # ()
[//]: # (FunASR supports pre-trained or further fine-tuned models for deployment as a service. The CPU version of the Chinese offline file conversion service has been released, details can be found in [docs](funasr/runtime/docs/SDK_tutorial.md). More detailed information about service deployment can be found in the [deployment roadmap](funasr/runtime/readme_cn.md).)

<a name="quick-start"></a>
## Quick Start

Below is a quick start tutorial. Test audio files ([Mandarin](https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/vad_example.wav), [English]()).

### Command-line usage

```shell
funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=asr_example_zh.wav
```

Note: this supports recognition of a single audio file, as well as a file list in Kaldi-style wav.scp format: `wav_id wav_path`. An illustrative invocation follows.
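
A hypothetical wav.scp invocation under that note (the list file name is illustrative; the flag syntax mirrors the command above):

```shell
# Hypothetical: pass a Kaldi-style list (wav_id wav_path per line) instead of a single file.
funasr +model=paraformer-zh +vad_model="fsmn-vad" +punc_model="ct-punc" +input=wav.scp
```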

### Speech Recognition (Non-streaming)

```python
from funasr import AutoModel
# paraformer-zh is a multi-functional asr model
model = AutoModel(model="paraformer-zh", model_revision="v2.0.4",
                  # ...
                  # spk_model="cam++", spk_model_revision="v2.0.2",
                  )
res = model.generate(input=f"{model.model_path}/example/asr_example.wav",
                     batch_size_s=300,
                     hotword='魔搭')
print(res)
```

Note: `model_hub` specifies the model repository: `ms` selects download from ModelScope, `hf` selects download from Huggingface.
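
A minimal sketch of that switch; treating `model_hub` as an AutoModel keyword argument is an assumption based only on the note above:

```python
# Assumption: model_hub is passed at construction time, with "ms" or "hf" as values.
model = AutoModel(model="paraformer-zh", model_revision="v2.0.4", model_hub="ms")
```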

### Speech Recognition (Streaming)

```python
from funasr import AutoModel

# ...
for i in range(total_chunk_num):
    # ...
    res = model.generate(input=speech_chunk, cache=cache, is_final=is_final, chunk_size=chunk_size, encoder_chunk_look_back=encoder_chunk_look_back, decoder_chunk_look_back=decoder_chunk_look_back)
    print(res)
```

Note: `chunk_size` is the configuration for streaming latency. `[0,10,5]` indicates that the real-time display granularity is `10*60=600ms`, and the lookahead information is `5*60=300ms`. Each inference input is `600ms` (sample points are `16000*0.6=9600`), and the output is the corresponding text. For the last speech segment input, `is_final=True` needs to be set to force the output of the last word.
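
The figures in that note can be reproduced directly; a small sketch of the arithmetic, assuming 16 kHz input:

```python
# Derivation of the streaming latency figures quoted above (16 kHz input assumed).
chunk_size = [0, 10, 5]                            # units of 60 ms
chunk_ms = chunk_size[1] * 60                      # 10 * 60 = 600 ms per inference step
lookahead_ms = chunk_size[2] * 60                  # 5 * 60 = 300 ms of lookahead
samples_per_chunk = int(16000 * chunk_ms / 1000)   # 16000 * 0.6 = 9600 sample points
print(chunk_ms, lookahead_ms, samples_per_chunk)   # 600 300 9600
```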

### Voice Activity Detection (Non-streaming)

```python
from funasr import AutoModel

model = AutoModel(model="fsmn-vad", model_revision="v2.0.4")
wav_file = f"{model.model_path}/example/asr_example.wav"
res = model.generate(input=wav_file)
print(res)
```
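
As described for fsmn-vad earlier in this card, the returned value holds the start and end times of the detected speech segments in milliseconds.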

### Voice Activity Detection (Streaming)

```python
from funasr import AutoModel

# ...
for i in range(total_chunk_num):
    # ...
    if len(res[0]["value"]):
        print(res)
```
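
Note: the example prints only when `res[0]["value"]` is non-empty; chunks that yield no new segment information return an empty value.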

### Punctuation Restoration

```python
from funasr import AutoModel

model = AutoModel(model="ct-punc", model_revision="v2.0.4")
res = model.generate(input="那今天的会就到这里吧 happy new year 明年见")
print(res)
```

### Timestamp Prediction

```python
from funasr import AutoModel

model = AutoModel(model="fa-zh", model_revision="v2.0.4")
wav_file = f"{model.model_path}/example/asr_example.wav"
text_file = f"{model.model_path}/example/text.txt"
res = model.generate(input=(wav_file, text_file), data_type=("sound", "text"))
print(res)
```
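
Here the audio and its transcript are passed as a pair, with `data_type` declaring the modality of each element of the `input` tuple.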

More examples can be found in the [docs](https://github.com/alibaba-damo-academy/FunASR/tree/main/examples/industrial_data_pretraining).