---
license: mit
---

# Audio-Reasoner
<p align="center">
<img src="assets/title.png" width="90%"/>
</p>

## Abstract
We implemented inference scaling on **Audio-Reasoner**, a large audio language model, enabling **deepthink** and **structured chain-of-thought (CoT) reasoning** for multimodal understanding and reasoning. To achieve this, we constructed CoTA, a high-quality dataset of **1.2M reasoning-rich samples** built with structured CoT techniques. Audio-Reasoner achieves state-of-the-art results on the **MMAU-mini (+25.42%)** and **AIR-Bench-Chat (+14.57%)** benchmarks.

<p align="center">
Audio-Reasoner-7B <a href="https://huggingface.co/zhifeixie/Audio-Reasoner/tree/main">🤗</a> | CoTA Dataset <a href="https://huggingface.co">🤗</a> (coming soon)<br>
Paper <a href="https://arxiv.org/abs/2503.02318">📑</a> | WeChat <a href="https://github.com/xzf-thu/Audio-Reasoner/blob/main/assets/wechat.jpg">💭</a> | Code <a href="https://github.com/xzf-thu/Audio-Reasoner">⚙️</a>
<br>
<a href="#demo">Demo</a> • <a href="#install">Install</a> • <a href="#quick-start">Quick Start</a> • <a href="#faq">FAQ</a> • <a href="#contact">Contact us</a><br>
<br>
If you like our work, please give us a star ⭐!
</p>

## Main Results
<p align="center">
<img src="assets/main_result.png" width="100%"/>
</p>

## News and Updates
- **2025.03.05:** ✅ **The Audio-Reasoner-7B checkpoint is released on Hugging Face <a href="https://huggingface.co/zhifeixie/Audio-Reasoner/tree/main">🤗</a>!**
- **2025.03.05:** ✅ **The Audio-Reasoner paper is uploaded to arXiv <a href="https://arxiv.org/abs/2503.02318">📑</a>.**
- **2025.03.04:** ✅ **Demos, inference code, and evaluation results have been released.**
- **2025.03.04:** ✅ **Created this repo.**

## Roadmap
- **2025.03:** 🔜 **Upload the CoTA dataset to Hugging Face 🤗.**
- **2025.04:** 🔜 **Open-source the data synthesis pipeline and training code.**

## Demo
<p align="center" width="80%">
<video controls src="https://github.com/user-attachments/assets/d50f75e7-288b-454b-92a3-c6f058be231b" title="Audio-Reasoner demo" width="100%"></video>
</p>

## Features
✅ Audio-Reasoner enables **deep reasoning and inference scaling** in audio-based tasks; it is built on Qwen2-Audio-Instruct with structured CoT training.

✅ CoTA offers **1.2M** high-quality captions and QA pairs across domains for structured reasoning and enhanced pretraining.

✅ The pretrained model and dataset cover various types of audio, including sound, music, and speech, and achieve state-of-the-art results across multiple benchmarks. Refer to our <a href="https://arxiv.org/abs/2503.02318">paper</a> for details.

## Install

**Clone and install**

- Clone the repo
```sh
git clone https://github.com/xzf-thu/Audio-Reasoner.git
cd Audio-Reasoner
```

- Install the required packages
```sh
conda create -n Audio-Reasoner python=3.10
conda activate Audio-Reasoner

pip install -r requirements.txt
pip install transformers==4.49.1
```
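
To verify that the environment resolved correctly, a quick sanity check (a minimal sketch; it only assumes the two packages installed above and should be run inside the activated env):
```python
# Confirm the pinned transformers build is active and ms-swift is importable.
import swift  # ms-swift installs under the top-level name `swift`
import transformers

print("transformers:", transformers.__version__)  # expect the pinned 4.49.x
```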

## Quick Start

**Chat using ms-swift**
```python
from swift.llm import InferEngine, InferRequest, PtEngine, RequestConfig
from swift.plugin import InferStats


def infer_stream(engine: 'InferEngine', infer_request: 'InferRequest'):
    request_config = RequestConfig(max_tokens=2048, temperature=0, stream=True)
    metric = InferStats()
    gen = engine.infer([infer_request], request_config, metrics=[metric])
    # messages[-1] is the user turn (messages[0] holds the system prompt)
    query = infer_request.messages[-1]['content']
    output = ""
    print(f'query: {query}\nresponse: ', end='')
    for resp_list in gen:
        if resp_list[0] is None:
            continue
        print(resp_list[0].choices[0].delta.content, end='', flush=True)
        output += resp_list[0].choices[0].delta.content
    print()
    print(f'metric: {metric.compute()}')
    return output


def get_message(audiopath, prompt):
    messages = [
        {'role': 'system', 'content': system},
        {
            'role': 'user',
            'content': [
                {'type': 'audio', 'audio': audiopath},
                {'type': 'text', 'text': prompt},
            ],
        },
    ]
    return messages


system = ('You are an audio deep-thinking model. Upon receiving a question, '
          'please respond in two parts: <THINK> and <RESPONSE>. The <THINK> '
          'section should be further divided into four parts: <PLANNING>, '
          '<CAPTION>, <REASONING>, and <SUMMARY>.')
infer_backend = 'pt'
model = 'qwen2_audio'
last_model_checkpoint = ""  # Please replace this with the path to the checkpoint
engine = PtEngine(last_model_checkpoint, max_batch_size=64, model_type=model)


def audioreasoner_gen(audiopath, prompt):
    return infer_stream(engine, InferRequest(messages=get_message(audiopath, prompt)))


def main():
    # Please replace this with your test audio
    audiopath = "assets/test.wav"
    # Please replace this with your question about the test audio
    prompt = "Which of the following best describes the rhythmic feel and time signature of the song?"
    audioreasoner_gen(audiopath, prompt)


if __name__ == '__main__':
    main()
```
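
The model returns one flat string containing the tags named in the system prompt. A minimal sketch for pulling out individual sections, assuming the checkpoint closes each tag in the `<TAG>...</TAG>` style (the `split_sections` helper is illustrative, not part of the repo):
```python
import re

def split_sections(output: str) -> dict:
    """Extract each tagged section from a generated response."""
    sections = {}
    for tag in ('PLANNING', 'CAPTION', 'REASONING', 'SUMMARY', 'RESPONSE'):
        match = re.search(rf'<{tag}>(.*?)</{tag}>', output, re.DOTALL)
        if match:
            sections[tag] = match.group(1).strip()
    return sections

# Example: show only the final answer, keeping the reasoning trace available.
# result = audioreasoner_gen("assets/test.wav", "What instrument opens the piece?")
# print(split_sections(result).get('RESPONSE', result))
```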

**Local test**

```sh
conda activate Audio-Reasoner
cd Audio-Reasoner
# Test-run the preset audio samples and questions
python inference.py
```

## FAQ

**1. What kinds of audio can Audio-Reasoner understand, and what kind of thinking does it perform?**
Audio-Reasoner can understand various types of audio, including sound, music, and speech. It conducts in-depth thinking in four parts: **planning, caption, reasoning, and summary**.

**2. Why is transformers installed after ms-swift in the environment configuration?**
The transformers version has a significant impact on model performance, and our tests show that `transformers==4.49.1` is one of the suitable versions. Installing ms-swift first and then pinning transformers keeps the environment stable and avoids version conflicts that could degrade the model's performance.
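
If you want to catch an accidental transformers upgrade early, a small runtime guard is enough (an illustrative snippet, not from the repo; the `4.49` prefix check mirrors the pin above):
```python
# Fail fast if the installed transformers version drifts from the validated pin.
import transformers

if not transformers.__version__.startswith("4.49"):
    raise RuntimeError(
        "Audio-Reasoner was validated with transformers==4.49.1; "
        f"found {transformers.__version__}. "
        "Re-run: pip install transformers==4.49.1"
    )
```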

## More Cases
<p align="center">
<img src="assets/figure2-samples.png" width="90%"/>
</p>

## Contact

If you have any questions, please feel free to contact us via `[email protected]`.

## Citation
Please cite our paper if you find our model and dataset useful. Thanks!
```
@misc{xie2025audioreasonerimprovingreasoningcapability,
      title={Audio-Reasoner: Improving Reasoning Capability in Large Audio Language Models},
      author={Zhifei Xie and Mingbao Lin and Zihang Liu and Pengcheng Wu and Shuicheng Yan and Chunyan Miao},
      year={2025},
      eprint={2503.02318},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2503.02318},
}
```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=xzf-thu/Audio-Reasoner&type=Date)](https://star-history.com/#xzf-thu/Audio-Reasoner&Date)