arxiv:2507.08771

BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity

Published on Jul 11
· Submitted by Raincleared on Jul 14
Authors:

Abstract

To alleviate the computational burden of large language models (LLMs), architectures with activation sparsity, represented by mixture-of-experts (MoE), have attracted increasing attention. However, the non-differentiable and inflexible routing of vanilla MoE hurts model performance. Moreover, while each token activates only a few parameters, these sparsely-activated architectures exhibit low chunk-level sparsity, indicating that the union of multiple consecutive tokens activates a large ratio of parameters. Such a sparsity pattern is unfriendly for acceleration under low-resource conditions (e.g., end-side devices) and incompatible with mainstream acceleration techniques (e.g., speculative decoding). To address these challenges, we introduce a novel MoE architecture, BlockFFN, as well as its efficient training and deployment techniques. Specifically, we use a router integrating ReLU activation and RMSNorm for differentiable and flexible routing. Next, to promote both token-level sparsity (TLS) and chunk-level sparsity (CLS), CLS-aware training objectives are designed, making BlockFFN more acceleration-friendly. Finally, we implement efficient acceleration kernels, combining activation sparsity and speculative decoding for the first time. The experimental results demonstrate the superior performance of BlockFFN over other MoE baselines, achieving over 80% TLS and 70% 8-token CLS. Our kernels achieve up to a 3.67× speedup over dense models on real end-side devices. All code and checkpoints are publicly available (https://github.com/thunlp/BlockFFN).

Community

Paper submitter

In this paper, we mainly address two challenges faced by existing MoE architectures:

  1. Performance compromise caused by imperfect routing, especially the non-differentiability and inflexibility of vanilla routing paradigms;
  2. Acceleration unfriendliness caused by low chunk-level sparsity (CLS), especially under conditions where multiple tokens are processed simultaneously, such as in offloading and speculative decoding (the sketch after this list illustrates the TLS/CLS distinction).
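
To make the TLS/CLS distinction concrete, here is a small, purely illustrative sketch (not taken from the paper's code): TLS is the fraction of experts a single token leaves inactive, while 8-token CLS is the fraction left inactive by the union of 8 consecutive tokens. The tensor shapes and the random mask below are assumptions made up for this example; only the 8-token chunk size follows the abstract.

```python
# Illustrative-only sketch: measuring token-level sparsity (TLS) and
# chunk-level sparsity (CLS) from a binary expert-activation mask.
import torch

def token_level_sparsity(mask: torch.Tensor) -> float:
    """mask: [num_tokens, num_experts] booleans, True = expert activated."""
    # Fraction of expert slots left inactive, averaged over tokens.
    return 1.0 - mask.float().mean().item()

def chunk_level_sparsity(mask: torch.Tensor, chunk_size: int = 8) -> float:
    num_tokens, num_experts = mask.shape
    num_chunks = num_tokens // chunk_size
    chunks = mask[: num_chunks * chunk_size].view(num_chunks, chunk_size, num_experts)
    # A chunk "activates" an expert if ANY of its tokens does (union over tokens),
    # so CLS is usually much lower than TLS unless routing is chunk-coherent.
    union = chunks.any(dim=1).float()        # [num_chunks, num_experts]
    return 1.0 - union.mean().item()

# Each token activates ~10% of experts (high TLS), but different tokens pick
# different experts, so the union over 8 tokens covers far more (lower CLS).
mask = torch.rand(64, 128) < 0.1
print(token_level_sparsity(mask), chunk_level_sparsity(mask, chunk_size=8))
```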

To address the above challenges, we introduce BlockFFN, a novel MoE architecture, as well
as its training techniques and efficient end-side deployment.

  1. For the model architecture, we propose BlockFFN, a novel MoE paradigm whose router module, combining ReLU activation and RMSNorm, minimizes the performance compromise of routing (a minimal router sketch is given below this list). Experiments demonstrate better performance than other MoE baselines such as TopK, DeepSeekMoE, GRIN, and ReMoE.
  2. For training techniques, we introduce CLS-aware training objectives that improve the CLS of BlockFFN alongside the vanilla token-level sparsity (TLS). In experiments, we obtain average TLS values above 80% and 8-token CLS values above 70%.
  3. For end-side deployment, we implement efficient acceleration kernels for BlockFFN, combining activation sparsity and speculative decoding for the first time (see the chunk-level compute sketch after the router example below). On an NVIDIA Jetson Orin NX, the kernels achieve a 3.67× speedup over baseline auto-regressive (AR) decoding.
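
As a rough illustration of item 1, here is a minimal sketch of what a ReLU-plus-RMSNorm router could look like. The module layout, dimensions, and how gate values are produced are our assumptions for illustration, not the paper's exact implementation.

```python
# Hypothetical sketch of a ReLU + RMSNorm router; layer names, sizes, and
# wiring are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn

class ReluRMSNormRouter(nn.Module):
    def __init__(self, d_model: int, num_experts: int, eps: float = 1e-6):
        super().__init__()
        self.proj = nn.Linear(d_model, num_experts, bias=False)
        self.scale = nn.Parameter(torch.ones(num_experts))  # RMSNorm gain
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # ReLU keeps routing differentiable yet naturally sparse: any expert
        # with a non-positive score gets exactly zero weight, and the number
        # of active experts can vary per token (no hard top-k selection).
        scores = torch.relu(self.proj(x))                    # [..., num_experts]
        # RMSNorm rescales the surviving gate values to a stable magnitude.
        rms = scores.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return scores * rms * self.scale

router = ReluRMSNormRouter(d_model=1024, num_experts=64)
gates = router(torch.randn(2, 8, 1024))      # zeros mark inactive experts
print("activated fraction:", (gates > 0).float().mean().item())
```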

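For item 3, the following hypothetical sketch conveys why high CLS helps when several tokens are processed together (e.g., verifying draft tokens in speculative decoding): the FFN is evaluated only over the union of experts that a chunk activates. The actual BlockFFN kernels are low-level implementations; this only shows the high-level idea, and all names and shapes here are assumptions.

```python
# Hypothetical, high-level illustration of chunk-level sparse FFN compute:
# for a chunk of tokens, only the UNION of their activated experts is
# loaded and computed. Names and shapes are assumptions for illustration.
import torch

def chunked_sparse_ffn(x, gates, w_in, w_out):
    """x: [chunk, d_model], gates: [chunk, num_experts] (zeros = inactive),
    w_in: [num_experts, d_model, d_ff], w_out: [num_experts, d_ff, d_model]."""
    active = (gates > 0).any(dim=0).nonzero(as_tuple=True)[0]  # union over chunk
    out = torch.zeros_like(x)
    for e in active.tolist():                 # touch only the union of experts
        h = torch.relu(x @ w_in[e])           # [chunk, d_ff]
        out += gates[:, e:e + 1] * (h @ w_out[e])
    return out

# With high CLS, `active` stays small even for 8-token chunks, so little expert
# weight data must be read per chunk, which is the property the kernels exploit.
```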
