Jakiro: Boosting Speculative Decoding with Decoupled Multi-Head via MoE
Abstract
Speculative decoding (SD) accelerates large language model inference by using a smaller draft model to predict multiple tokens, which are then verified in parallel by the larger target model. However, the limited capacity of the draft model often necessitates tree-based sampling, in which multiple candidates are generated at each step, to improve prediction accuracy. We identify a key limitation of this approach: candidates at the same step are derived from the same representation, which limits their diversity and reduces overall effectiveness. To address this, we propose Jakiro, which leverages a Mixture of Experts (MoE): independent experts generate diverse predictions, effectively decoupling the correlations among candidates. Furthermore, we introduce a hybrid inference strategy that combines autoregressive decoding for initial tokens with parallel decoding for subsequent stages, and we enhance the latter with a contrastive mechanism in the feature space to improve accuracy. Our method significantly boosts prediction accuracy and achieves higher inference speedups. Extensive experiments across diverse models validate the effectiveness and robustness of our approach, establishing a new state of the art (SOTA) in speculative decoding. Our code is available at https://github.com/haiduo/Jakiro.
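To make the decoupling idea concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' released implementation: the class name `MoEDraftHead`, the per-expert architecture, and all dimensions are illustrative. Each expert maps the same backbone feature to its own logits, so sibling candidates at a draft step come from independent representations rather than from the top-k of one shared distribution.

```python
import torch
import torch.nn as nn

class MoEDraftHead(nn.Module):
    """Illustrative sketch: independent experts produce decoupled
    candidate distributions for one step of the draft tree."""

    def __init__(self, hidden_size: int, vocab_size: int, num_experts: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.SiLU(),
                nn.Linear(hidden_size, vocab_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, hidden: torch.Tensor) -> list[torch.Tensor]:
        # hidden: (batch, hidden_size) feature from the draft backbone.
        # One logit tensor per expert; each expert seeds its own branch
        # of the candidate tree at this step.
        return [expert(hidden) for expert in self.experts]

# Toy usage: two experts propose two decoupled candidates per step.
head = MoEDraftHead(hidden_size=64, vocab_size=1000)
hidden = torch.randn(1, 64)
branch_tokens = [logits.argmax(dim=-1) for logits in head(hidden)]
```

Contrast this with vanilla tree sampling, where all sibling candidates at a step are the top-k tokens of a single distribution and are therefore highly correlated.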
Community
We are thrilled to introduce Jakiro, an innovative approach that significantly enhances speculative decoding (SD) for large language models (LLMs). By leveraging the power of Mixture of Experts (MoE), Jakiro enables diverse predictions from independent experts, effectively addressing a key limitation of traditional tree-based sampling methods.
Key Highlights:
State-of-the-Art Performance: Jakiro sets a new SOTA for speculative decoding, improving both prediction accuracy and inference speed over prior methods.
Universal Compatibility: Works seamlessly with a wide range of LLMs, since the target model is only needed as a verifier; see the sketch after this list for why verification is model-agnostic.
Room for Optimization: Current results are achieved without additional acceleration techniques (e.g., FlashAttention, vLLM), leaving ample room for further gains.
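As a rough illustration of the compatibility point, here is a hedged sketch of greedy draft verification. This is standard speculative decoding, not Jakiro's full tree verification, and it assumes a Hugging-Face-style causal LM whose forward pass returns `.logits`. The target only needs one batched forward pass over the proposed tokens, so any such model can serve as the verifier.

```python
import torch

@torch.no_grad()
def greedy_verify(target, prefix_ids: torch.Tensor, draft_ids: torch.Tensor) -> torch.Tensor:
    """Accept the longest prefix of draft tokens matching the target's
    own greedy choices, then append one correction token from the target."""
    seq = torch.cat([prefix_ids, draft_ids]).unsqueeze(0)   # (1, len)
    logits = target(seq).logits[0]                          # (len, vocab)
    # Greedy target predictions for each draft position (plus one extra,
    # used as the bonus/correction token).
    start = prefix_ids.shape[-1] - 1
    preds = logits[start : start + draft_ids.shape[-1] + 1].argmax(dim=-1)
    accepted = 0
    for i in range(draft_ids.shape[-1]):
        if preds[i].item() != draft_ids[i].item():
            break
        accepted += 1
    # Accepted draft tokens plus the target's token at the first mismatch
    # (or its bonus token if every draft token was accepted).
    return torch.cat([prefix_ids, draft_ids[:accepted], preds[accepted : accepted + 1]])
```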
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Speculative Ensemble: Fast Large Language Model Ensemble via Speculation (2025)
- Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment (2025)
- AdaEAGLE: Optimizing Speculative Decoding via Explicit Modeling of Adaptive Draft Structures (2024)
- Speeding up Speculative Decoding via Approximate Verification (2025)
- Reward-Guided Speculative Decoding for Efficient LLM Reasoning (2025)
- Falcon: Faster and Parallel Inference of Large Language Models through Enhanced Semi-Autoregressive Drafting and Custom-Designed Decoding Tree (2024)
- Dovetail: A CPU/GPU Heterogeneous Speculative Decoding for LLM inference (2024)