Training dynamic models using early exits for automatic speech recognition on resource-constrained devices. arXiv:2309.09546, Sep 18, 2023.
Sequence-Level Knowledge Distillation for Class-Incremental End-to-End Spoken Language Understanding. arXiv:2305.13899, May 23, 2023.
Mixtures of Deep Neural Experts for Automated Speech Scoring. arXiv:2106.12475, Jun 23, 2021.
Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters. arXiv:2402.00828, Feb 1, 2024.
An Investigation of the Combination of Rehearsal and Knowledge Distillation in Continual Learning for Spoken Language Understanding. arXiv:2211.08161, Nov 15, 2022.
Large Language Models Are Strong Audio-Visual Speech Recognition Learners. arXiv:2409.12319, Sep 18, 2024.
Scaling strategies for on-device low-complexity source separation with Conv-Tasnet. arXiv:2303.03005, Mar 6, 2023.
Scaling and Enhancing LLM-based AVSR: A Sparse Mixture of Projectors Approach. arXiv:2505.14336, May 20, 2025.
Splitformer: An improved early-exit architecture for automatic speech recognition on edge devices. arXiv:2506.18035, Jun 22, 2025.