DeCRED: Decoder-Centric Regularization for Encoder-Decoder Based Speech Recognition
Abstract
This paper presents a simple yet effective regularization for the internal language model induced by the decoder in encoder-decoder ASR models, improving robustness and generalization in both in-domain and out-of-domain settings. The proposed method, Decoder-Centric Regularization in Encoder-Decoder (DeCRED), adds auxiliary classifiers to the decoder, enabling next-token prediction from intermediate logits. Empirically, DeCRED reduces the mean internal LM BPE perplexity by 36.6% relative, measured across 11 test sets. This translates into WER improvements over the baseline on 5 of 7 in-domain and 3 of 4 out-of-domain test sets, reducing macro WER from 6.4% to 6.3% and from 18.2% to 16.2%, respectively. On TEDLIUM3, DeCRED achieves a 7.0% WER, surpassing the baseline and the encoder-centric InterCTC regularization by 0.6% and 0.5%, respectively. Finally, we compare DeCRED with OWSM v3.1 and Whisper-medium, showing competitive WERs despite training on substantially less data with fewer parameters.
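To make the mechanism concrete, below is a minimal PyTorch sketch of decoder-centric intermediate regularization in the spirit of DeCRED: a classifier head applied to an intermediate decoder layer yields auxiliary next-token logits whose cross-entropy loss is added to the final-layer loss. The layer choice (`aux_layers`), the loss weight (`aux_weight`), and the sharing of the output projection across heads are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeCREDDecoderSketch(nn.Module):
    """Transformer decoder with an auxiliary intermediate classifier.

    Hypothetical illustration of decoder-centric regularization:
    next-token cross-entropy is computed not only from the final
    decoder layer but also from intermediate decoder logits.
    """

    def __init__(self, num_layers=6, d_model=256, nhead=4,
                 vocab_size=500, aux_layers=(3,), aux_weight=0.3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        # Shared classifier head (weight sharing is an assumption here).
        self.out_proj = nn.Linear(d_model, vocab_size)
        self.aux_layers = set(aux_layers)   # layers with auxiliary losses
        self.aux_weight = aux_weight        # weight of each auxiliary loss

    def forward(self, tgt_emb, memory, tgt_mask, targets):
        """tgt_emb: (B, T, d_model) shifted target embeddings,
        memory: (B, S, d_model) encoder states, targets: (B, T) token ids."""
        x, loss = tgt_emb, torch.zeros((), device=tgt_emb.device)
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x, memory, tgt_mask=tgt_mask)
            if i in self.aux_layers:
                # Auxiliary next-token prediction from intermediate logits.
                aux_logits = self.out_proj(x)
                loss = loss + self.aux_weight * F.cross_entropy(
                    aux_logits.transpose(1, 2), targets)
        final_logits = self.out_proj(x)
        loss = loss + F.cross_entropy(final_logits.transpose(1, 2), targets)
        return final_logits, loss
```

In a sketch like this, only the final head would be used at decoding time; the auxiliary heads act purely as a training-time regularizer of the decoder's internal language model.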
Community
Accepted at IEEE ASRU 2025
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Boosting CTC-Based ASR Using LLM-Based Intermediate Loss Regularization (2025)
- Enhanced Hybrid Transducer and Attention Encoder Decoder with Text Data (2025)
- SHNU Multilingual Conversational Speech Recognition System for INTERSPEECH 2025 MLC-SLM Challenge (2025)
- Splitformer: An improved early-exit architecture for automatic speech recognition on edge devices (2025)
- Whisfusion: Parallel ASR Decoding via a Diffusion Transformer (2025)
- AsyncSwitch: Asynchronous Text-Speech Adaptation for Code-Switched ASR (2025)
- Self-Improvement for Audio Large Language Model using Unlabeled Speech (2025)