---
license: cc-by-nc-nd-4.0
datasets:
- amphion/Emilia-Dataset
language:
- en
- zh
- ja
- ko
- de
- fr
tags:
- tts
- vc
- svs
- svc
- music
---

# Vevo1.5

[![blog](https://img.shields.io/badge/Vevo1.5-Blog-blue.svg)](https://veiled-army-9c5.notion.site/Vevo1-5-1d2ce17b49a280b5b444d3fa2300c93a)
[![arXiv](https://img.shields.io/badge/Vevo-Paper-COLOR.svg)](https://openreview.net/pdf?id=anQDiQZhDP)
[![hf](https://img.shields.io/badge/%F0%9F%A4%97%20HuggingFace-model-yellow)](https://huggingface.co/amphion/VevoSing)

We present **Vevo1.5**, a versatile zero-shot voice imitation framework capable of modeling both speech and singing voices. This framework offers two key features:

1. Unified speech and singing voice modeling.
2. Fine-grained control over multiple voice attributes, including text, melody, style, and timbre.

For a hands-on demonstration of Vevo1.5's capabilities, we invite readers to explore [our accompanying blog post](https://veiled-army-9c5.notion.site/Vevo1-5-1d2ce17b49a280b5b444d3fa2300c93a).

## Pre-trained Models

We have included the following pre-trained models at Amphion:

| Model | Description | Pre-trained Data and Checkpoint |
| ----- | ----------- | ------------------------------- |
| **Prosody Tokenizer** | Converting speech/singing waveforms to **coarse-grained prosody tokens** (which can also be interpreted as the *melody contour* from a musical perspective). It is a single-codebook VQ-VAE with a vocabulary size of 512. The frame rate is 6.25 Hz (i.e., **56.25 bps**). | [🤗 Emilia-101k, Sing-0.4k](https://huggingface.co/amphion/Vevo1.5/tree/main/tokenizer/prosody_fvq512_6.25hz) |
| **Content-Style Tokenizer** | Converting speech/singing waveforms to **fine-grained content-style tokens**. It is a single-codebook VQ-VAE with a vocabulary size of 16384. The frame rate is 12.5 Hz (i.e., **175 bps**). | [🤗 Emilia-101k, Sing-0.4k](https://huggingface.co/amphion/Vevo1.5/tree/main/tokenizer/contentstyle_fvq16384_12.5hz) |
| **Auto-regressive Transformer** | Predicting content-style tokens from phone tokens (and, optionally, prosody tokens) with an auto-regressive transformer (780M). | [🤗 Emilia-101k, Sing-0.4k](https://huggingface.co/amphion/Vevo1.5/tree/main/contentstyle_modeling/ar_emilia101k_sing0.4k) <br>[🤗 Emilia-101k, SingNet-7k](https://huggingface.co/amphion/Vevo1.5/tree/main/contentstyle_modeling/ar_emilia101k_singnet7k) |
| **Flow-matching Transformer** | Predicting mel-spectrograms from content-style tokens with a flow-matching transformer (350M). | [🤗 Emilia-101k, Sing-0.4k](https://huggingface.co/amphion/Vevo1.5/tree/main/acoustic_modeling/fm_emilia101k_sing0.4k) <br>[🤗 Emilia-101k, SingNet-7k](https://huggingface.co/amphion/Vevo1.5/tree/main/acoustic_modeling/fm_emilia101k_singnet7k) |
| **Vocoder** | Predicting audio from mel-spectrograms with a Vocos-based vocoder (250M). | [🤗 Emilia-101k](https://huggingface.co/amphion/Vevo/tree/main/acoustic_modeling/Vocoder) <br>[🤗 Emilia-101k, SingNet-3k](https://huggingface.co/amphion/Vevo1.5/tree/main/acoustic_modeling/Vocoder) |

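The bitrates quoted above follow directly from each tokenizer's codebook size and frame rate: a vocabulary of size `V` carries `log2(V)` bits per token. A minimal sketch of the arithmetic (illustrative only; not part of the Amphion codebase):

```python
import math

def tokenizer_bitrate(vocab_size: int, frame_rate_hz: float) -> float:
    """Bits per second of a single-codebook tokenizer."""
    return math.log2(vocab_size) * frame_rate_hz

print(tokenizer_bitrate(512, 6.25))    # prosody tokens: 9 bits/token * 6.25 Hz = 56.25 bps
print(tokenizer_bitrate(16384, 12.5))  # content-style tokens: 14 bits/token * 12.5 Hz = 175.0 bps
```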

The training data includes:

- **Emilia-101k**: about 101k hours of speech data.

- **Sing-0.4k**: about 400 hours of open-source singing voice data, as follows:

  | Dataset Name | #Hours |
  | ------------ | ------ |
  | ACESinger | 320.6 |
  | OpenSinger | 45.7 |
  | M4Singer | 28.4 |
  | Popbutfy | 23.8 |
  | PopCS | 11.5 |
  | Opencpop | 5.1 |
  | CSD | 3.8 |
  | **Total** | **438.9** |

- **SingNet-7k**: about 7,000 hours of internal singing voice data, preprocessed using the [SingNet pipeline](https://openreview.net/pdf?id=X6ffdf6nh3). SingNet-3k is a 3,000-hour subset of SingNet-7k.

## Quickstart (Inference Only)

To run inference with Vevo1.5, follow the steps below:

1. Clone the repository and install the environment.
2. Run the inference script.

> **Note:** Vevo1.5 has the same environment requirements as MaskGCT/Vevo.

### Clone and Environment Setup

#### 1. Clone the repository

```bash
git clone https://github.com/open-mmlab/Amphion.git
cd Amphion
```

#### 2. Install the environment

Before you start installing, make sure you are in the `Amphion` directory. If not, use `cd` to enter it.

Since we use `phonemizer` to convert text to phonemes, you need to install `espeak-ng` first. More details can be found [here](https://bootphon.github.io/phonemizer/install.html). Choose the installation command that matches your operating system:

```bash
# For Debian-like distributions (e.g., Ubuntu, Mint)
sudo apt-get install espeak-ng
# For RedHat-like distributions (e.g., CentOS, Fedora)
sudo yum install espeak-ng
```

Now, install the environment. We recommend using conda:

```bash
conda create -n vevo python=3.10
conda activate vevo

pip install -r models/vc/vevo/requirements.txt
```
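To verify that `espeak-ng` is visible to `phonemizer`, you can run a quick one-off check. A minimal sketch, assuming the `phonemizer` package installed by the requirements file above:

```python
from phonemizer import phonemize

# If espeak-ng is installed correctly, this prints an IPA transcription,
# e.g. something like "həloʊ wɜːld".
print(phonemize("hello world", language="en-us", backend="espeak"))
```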

### Inference Script

```sh
# FM model only (i.e., timbre control; usually for VC and SVC)
python -m models.svc.vevosing.infer_vevosing_fm

# AR + FM (i.e., text, prosody, and style control)
python -m models.svc.vevosing.infer_vevosing_ar
```

Running either script will automatically download the pre-trained models from HuggingFace and start the inference process. The generated audio files are saved to `models/svc/vevosing/output/*.wav` by default.
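Once a run completes, a quick way to sanity-check the outputs is to list each file's duration and sample rate. A minimal sketch, assuming `torchaudio` is available in the environment installed above and the default output directory is unchanged:

```python
from pathlib import Path

import torchaudio

# Default output directory of the inference scripts
output_dir = Path("models/svc/vevosing/output")

for wav_path in sorted(output_dir.glob("*.wav")):
    waveform, sample_rate = torchaudio.load(str(wav_path))
    duration_s = waveform.shape[-1] / sample_rate
    print(f"{wav_path.name}: {duration_s:.2f} s at {sample_rate} Hz")
```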

## Citations

If you find this work useful for your research, please cite our paper:

```bibtex
@inproceedings{vevo,
  author    = {Xueyao Zhang and Xiaohui Zhang and Kainan Peng and Zhenyu Tang and Vimal Manohar and Yingru Liu and Jeff Hwang and Dangna Li and Yuhao Wang and Julian Chan and Yuan Huang and Zhizheng Wu and Mingbo Ma},
  title     = {Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement},
  booktitle = {{ICLR}},
  publisher = {OpenReview.net},
  year      = {2025}
}
```

If you use the Vevo1.5 pre-trained models or the training recipe of Amphion, please also cite:

```bibtex
@article{amphion2,
  title   = {Overview of the Amphion Toolkit (v0.2)},
  author  = {Jiaqi Li and Xueyao Zhang and Yuancheng Wang and Haorui He and Chaoren Wang and Li Wang and Huan Liao and Junyi Ao and Zeyu Xie and Yiqiao Huang and Junan Zhang and Zhizheng Wu},
  year    = {2025},
  journal = {arXiv preprint arXiv:2501.15442},
}

@inproceedings{amphion,
  author    = {Xueyao Zhang and Liumeng Xue and Yicheng Gu and Yuancheng Wang and Jiaqi Li and Haorui He and Chaoren Wang and Ting Song and Xi Chen and Zihao Fang and Haopeng Chen and Junan Zhang and Tze Ying Tang and Lexiao Zou and Mingxuan Wang and Jun Han and Kai Chen and Haizhou Li and Zhizheng Wu},
  title     = {Amphion: An Open-Source Audio, Music and Speech Generation Toolkit},
  booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2024},
  year      = {2024}
}
```