Improve model card: Add pipeline tag, library name, and enrich content

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +100 -69
README.md CHANGED
@@ -1,69 +1,100 @@
- ---
- license: apache-2.0
- ---
-
- # **Introduction**
-
- **`XY-Tokenizer`** is a speech codec that simultaneously models both semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back to high-quality audio. It achieves efficient speech representation at only 1kbps with RVQ8 quantization at 12.5Hz frame rate.
-
- - **Paper:** [Read on arXiv](https://arxiv.org/abs/2506.23325)
- - **Source Code:**
-   - [GitHub Repo](https://github.com/OpenMOSS/MOSS-TTSD/tree/main/XY_Tokenizer)
-   - [Hugging Face Repo](https://huggingface.co/spaces/fnlp/MOSS-TTSD/tree/main/XY_Tokenizer)
-
- ## 📚 Related Project: **[MOSS-TTSD](https://huggingface.co/fnlp/MOSS-TTSD-v0.5)**
-
- **`XY-Tokenizer`** serves as the underlying neural codec for **`MOSS-TTSD`**, our 1.7B Audio Language Model. \
- Explore **`MOSS-TTSD`** for advanced text-to-speech and other audio generation tasks on [GitHub](https://github.com/OpenMOSS/MOSS-TTSD), [Blog](http://www.open-moss.com/en/moss-ttsd/), [博客](https://www.open-moss.com/cn/moss-ttsd/), and [Space Demo](https://huggingface.co/spaces/fnlp/MOSS-TTSD).
-
- ## Features
-
- - **Dual-channel modeling**: Simultaneously captures semantic meaning and acoustic details
- - **Efficient representation**: 1kbps bitrate with RVQ8 quantization at 12.5Hz
- - **High-quality audio tokenization**: Convert speech to discrete tokens and back with minimal quality loss
- - **Long audio support**: Process audio files longer than 30 seconds using chunking with overlap
- - **Batch processing**: Efficiently process multiple audio files in batches
- - **24kHz output**: Generate high-quality 24kHz audio output
-
-
- ## 🚀 Installation
-
- ```bash
- git clone https://github.com/OpenMOSS/MOSS-TTSD.git
- cd MOSS-TTSD
- conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
- pip install -r XY_Tokenizer/requirements.txt
- ```
-
- ## 💻 Quick Start
-
- Here's how to use **`XY-Tokenizer`** with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.
-
- ```python
- import torchaudio
- from transformers import AutoFeatureExtractor, AutoModel
-
- # 1. Load the feature extractor and the codec model
- feature_extractor = AutoFeatureExtractor.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True)
- codec = AutoModel.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True, device_map="auto").eval()
-
- # 2. Load and preprocess the audio
- # The model expects a 16kHz sample rate.
- wav_form, sampling_rate = torchaudio.load("examples/zh_spk1_moon.wav")
- if sampling_rate != 16000:
-     wav_form = torchaudio.functional.resample(wav_form, orig_freq=sampling_rate, new_freq=16000)
-
- # 3. Encode the audio into discrete codes
- input_spectrum = feature_extractor(wav_form, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
- # The 'code' dictionary contains the discrete audio codes
- code = codec.encode(input_spectrum)
-
- # 4. Decode the codes back to an audio waveform
- # The output is high-quality 24kHz audio.
- output_wav = codec.decode(code["audio_codes"], overlap_seconds=10)
-
- # 5. Save the reconstructed audio
- for i, audio in enumerate(output_wav["audio_values"]):
-     torchaudio.save(f"outputs/audio_{i}.wav", audio.cpu(), 24000)
-
- ```
+ ---
+ license: apache-2.0
+ pipeline_tag: audio-to-audio
+ library_name: transformers
+ ---
+
+ # XY-Tokenizer: Mitigating the Semantic-Acoustic Conflict in Low-Bitrate Speech Codecs
+
+ ## Introduction
+
+ **`XY-Tokenizer`** is a speech codec that simultaneously models both the semantic and acoustic aspects of speech, converting audio into discrete tokens and decoding them back to high-quality audio. It achieves efficient speech representation at only 1kbps, using RVQ8 quantization at a 12.5Hz frame rate.
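+
+ These numbers are mutually consistent: 8 codebooks at 12.5 frames per second yield 100 tokens per second, so a 1kbps budget implies 10 bits per token, i.e. 1024-entry codebooks. A quick sanity check of that arithmetic (the 1024-entry codebook size is an assumption inferred from the stated bitrate, not taken from this card):
+
+ ```python
+ import math
+
+ CODEBOOK_SIZE = 1024   # assumed; 2**10 entries = 10 bits per token
+ FRAME_RATE_HZ = 12.5   # frames per second
+ NUM_QUANTIZERS = 8     # RVQ8 -> 8 residual codebooks per frame
+
+ bits_per_token = math.log2(CODEBOOK_SIZE)           # 10.0
+ tokens_per_second = FRAME_RATE_HZ * NUM_QUANTIZERS  # 100.0
+ print(f"{tokens_per_second * bits_per_token:.0f} bps")  # 1000 bps = 1kbps
+ ```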
+
+ - **Paper:** [Read on arXiv](https://arxiv.org/abs/2506.23325)
+ - **Source Code:** [GitHub Repo](https://github.com/OpenMOSS/MOSS-TTSD)
+
+ ## 📚 Related Project: **[MOSS-TTSD](https://huggingface.co/fnlp/MOSS-TTSD-v0.5)**
+
+ **`XY-Tokenizer`** serves as the underlying neural codec for **`MOSS-TTSD`**, our 1.7B Audio Language Model. \
+ Explore **`MOSS-TTSD`** for advanced text-to-speech and other audio generation tasks on [GitHub](https://github.com/OpenMOSS/MOSS-TTSD), the [Blog (English)](http://www.open-moss.com/en/moss-ttsd/), the [Blog (Chinese)](https://www.open-moss.com/cn/moss-ttsd/), and the [Space Demo](https://huggingface.co/spaces/fnlp/MOSS-TTSD).
+
+ ## Features
+
+ - **Dual-channel modeling**: Simultaneously captures semantic meaning and acoustic details
+ - **Efficient representation**: 1kbps bitrate with RVQ8 quantization at a 12.5Hz frame rate
+ - **High-quality audio tokenization**: Converts speech to discrete tokens and back with minimal quality loss
+ - **Long audio support**: Processes audio files longer than 30 seconds using chunking with overlap (see the sketch after this list)
+ - **Batch processing**: Efficiently processes multiple audio files in batches
+ - **24kHz output**: Generates high-quality 24kHz audio output
+
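+ The long-audio path works by splitting the waveform into fixed-size chunks that share an overlap region, coding each chunk, and stitching the decoded chunks back together (the `overlap_seconds` argument in the Quick Start below controls this). Here is a minimal, self-contained sketch of the splitting step, independent of the XY-Tokenizer API; the 30s/10s sizes and all names are illustrative assumptions:
+
+ ```python
+ import torch
+
+ def chunk_with_overlap(wav: torch.Tensor, sr: int = 16000,
+                        chunk_seconds: float = 30.0,
+                        overlap_seconds: float = 10.0) -> list[torch.Tensor]:
+     """Split a (channels, samples) waveform into overlapping chunks."""
+     chunk = int(chunk_seconds * sr)
+     hop = chunk - int(overlap_seconds * sr)  # stride between chunk starts
+     return [wav[..., start:start + chunk]
+             for start in range(0, max(wav.shape[-1] - chunk, 0) + hop, hop)]
+
+ # A 75-second mono clip becomes 30s pieces whose edges overlap by 10s.
+ wav = torch.randn(1, 75 * 16000)
+ print([c.shape[-1] / 16000 for c in chunk_with_overlap(wav)])
+ # [30.0, 30.0, 30.0, 15.0]
+ ```
+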
+ ## 🚀 Installation
+
+ ```bash
+ git clone https://github.com/OpenMOSS/MOSS-TTSD.git
+ cd MOSS-TTSD
+ conda create -n xy_tokenizer python=3.10 -y && conda activate xy_tokenizer
+ pip install -r XY_Tokenizer/requirements.txt
+ ```
+
+ ## 💻 Quick Start
+
+ Here's how to use **`XY-Tokenizer`** with `transformers` to encode an audio file into discrete tokens and decode it back into a waveform.
+
+ ```python
+ import os
+
+ import torchaudio
+ from transformers import AutoFeatureExtractor, AutoModel
+
+ # 1. Load the feature extractor and the codec model
+ feature_extractor = AutoFeatureExtractor.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True)
+ codec = AutoModel.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True, device_map="auto").eval()
+
+ # 2. Load and preprocess the audio
+ # The model expects a 16kHz sample rate.
+ wav_form, sampling_rate = torchaudio.load("examples/zh_spk1_moon.wav")
+ if sampling_rate != 16000:
+     wav_form = torchaudio.functional.resample(wav_form, orig_freq=sampling_rate, new_freq=16000)
+
+ # 3. Encode the audio into discrete codes
+ input_spectrum = feature_extractor(wav_form, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
+ # The 'code' dictionary holds the discrete audio codes under "audio_codes"
+ code = codec.encode(input_spectrum)
+
+ # 4. Decode the codes back to an audio waveform
+ # The output is high-quality 24kHz audio.
+ output_wav = codec.decode(code["audio_codes"], overlap_seconds=10)
+
+ # 5. Save the reconstructed audio
+ os.makedirs("outputs", exist_ok=True)  # torchaudio.save does not create missing directories
+ for i, audio in enumerate(output_wav["audio_values"]):
+     torchaudio.save(f"outputs/audio_{i}.wav", audio.cpu(), 24000)
+ ```
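+
+ The same pipeline extends to batches. Transformers feature extractors conventionally accept a list of waveforms and return padded, batched tensors plus an attention mask; assuming XY-Tokenizer's extractor follows that convention (not verified here, and the second example path is hypothetical), batched encoding would look roughly like this:
+
+ ```python
+ import torchaudio
+ from transformers import AutoFeatureExtractor, AutoModel
+
+ feature_extractor = AutoFeatureExtractor.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True)
+ codec = AutoModel.from_pretrained("MCplayer/XY_Tokenizer", trust_remote_code=True, device_map="auto").eval()
+
+ # Load several clips, resampling each to the expected 16kHz.
+ paths = ["examples/zh_spk1_moon.wav", "examples/zh_spk2_moon.wav"]
+ waveforms = []
+ for path in paths:
+     wav, sr = torchaudio.load(path)
+     if sr != 16000:
+         wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=16000)
+     waveforms.append(wav.squeeze(0))  # assume mono: shape (samples,)
+
+ # Assumption: a list input is padded into one batch, as with other
+ # Transformers feature extractors.
+ batch = feature_extractor(waveforms, sampling_rate=16000, return_attention_mask=True, return_tensors="pt")
+ codes = codec.encode(batch)
+ decoded = codec.decode(codes["audio_codes"], overlap_seconds=10)
+ print(len(decoded["audio_values"]))  # one 24kHz waveform per input clip
+ ```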
+
+ ## Available Models 🗂️
+
+ | Model Name | Hugging Face | Training Data |
+ |:----------:|:------------:|:-------------:|
+ | XY-Tokenizer | [🤗](https://huggingface.co/fdugyt/XY_Tokenizer) | Emilia |
+ | XY-Tokenizer-TTSD-V0 (used in [MOSS-TTSD](https://github.com/OpenMOSS/MOSS-TTSD)) | [🤗](https://huggingface.co/fnlp/XY_Tokenizer_TTSD_V0/) | Emilia + Internal Data (containing general audio) |
+
+ ## Demos 🎮
+
+ See our [blog](http://www.open-moss.com/en/moss-ttsd/) for more demos.
+
+ ## License 📜
+
+ XY-Tokenizer is released under the Apache 2.0 license.
+
+ ## Citation 📚
+
+ ```bibtex
+ @misc{gong2025xytokenizermitigatingsemanticacousticconflict,
+       title={XY-Tokenizer: Mitigating the Semantic-Acoustic Conflict in Low-Bitrate Speech Codecs},
+       author={Yitian Gong and Luozhijie Jin and Ruifan Deng and Dong Zhang and Xin Zhang and Qinyuan Cheng and Zhaoye Fei and Shimin Li and Xipeng Qiu},
+       year={2025},
+       eprint={2506.23325},
+       archivePrefix={arXiv},
+       primaryClass={cs.SD},
+       url={https://arxiv.org/abs/2506.23325},
+ }
+ ```