Memory-Efficient Visual Autoregressive Modeling with Scale-Aware KV Cache Compression
Abstract
ScaleKV compresses the KV cache in Visual Autoregressive models by classifying transformer layers as drafters or refiners and allocating cache budgets accordingly, reducing memory consumption while maintaining high fidelity.
Visual Autoregressive (VAR) modeling has garnered significant attention for its innovative next-scale prediction approach, which yields substantial improvements in efficiency, scalability, and zero-shot generalization. Nevertheless, the coarse-to-fine methodology inherent in VAR results in exponential growth of the KV cache during inference, causing considerable memory consumption and computational redundancy. To address these bottlenecks, we introduce ScaleKV, a novel KV cache compression framework tailored for VAR architectures. ScaleKV leverages two critical observations: varying cache demands across transformer layers and distinct attention patterns at different scales. Based on these insights, ScaleKV categorizes transformer layers into two functional groups: drafters and refiners. Drafters exhibit dispersed attention across multiple scales, thereby requiring greater cache capacity. Conversely, refiners focus attention on the current token map to process local details, consequently necessitating substantially reduced cache capacity. ScaleKV optimizes the multi-scale inference pipeline by identifying scale-specific drafters and refiners, facilitating differentiated cache management tailored to each scale. Evaluation on the state-of-the-art text-to-image VAR model family, Infinity, demonstrates that our approach effectively reduces the required KV cache memory to 10% of its original size while preserving pixel-level fidelity.
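To make the idea of differentiated cache management more concrete, here is a minimal sketch of how per-scale budgets might be split between drafter and refiner layers and how a layer's cache could be pruned to its budget by keeping the most-attended entries. All names and parameters below (`allocate_budgets`, `compress_kv_cache`, `drafter_ratio`) are hypothetical illustrations under assumed tensor shapes, not the paper's actual implementation.

```python
import torch

def allocate_budgets(layer_roles, total_budget, drafter_ratio=0.8):
    """Split a per-scale cache budget between drafter and refiner layers.

    Drafters receive a larger share because their attention is dispersed
    across earlier scales; refiners attend mostly to the current token map.
    (Hypothetical allocation rule; the ratio is an assumed parameter.)
    """
    num_drafters = sum(role == "drafter" for role in layer_roles)
    num_refiners = len(layer_roles) - num_drafters
    per_drafter = int(total_budget * drafter_ratio / max(num_drafters, 1))
    per_refiner = int(total_budget * (1 - drafter_ratio) / max(num_refiners, 1))
    return [per_drafter if role == "drafter" else per_refiner for role in layer_roles]

def compress_kv_cache(keys, values, attn_scores, budget):
    """Keep only the `budget` most-attended cache entries for one layer.

    keys, values: [num_heads, cache_len, head_dim]
    attn_scores:  [cache_len] aggregate attention each cached token received
    """
    k = min(budget, attn_scores.numel())
    kept = torch.topk(attn_scores, k=k).indices
    kept, _ = torch.sort(kept)  # preserve the original token order
    return keys[:, kept], values[:, kept]

# Example: four layers at one scale, 1024 cached tokens to retain in total.
roles = ["drafter", "drafter", "refiner", "refiner"]
print(allocate_budgets(roles, total_budget=1024))  # -> [409, 409, 102, 102]
```

The design choice illustrated here is that compression is applied per layer and per scale, so a refiner layer can discard most of its cache without affecting the drafter layers that still need broad multi-scale context.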
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Head-Aware KV Cache Compression for Efficient Visual Autoregressive Modeling (2025)
- FastVAR: Linear Visual Autoregressive Modeling via Cached Token Pruning (2025)
- AirCache: Activating Inter-modal Relevancy KV Cache Compression for Efficient Large Vision-Language Model Inference (2025)
- SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching (2025)
- Fine-Tuning Visual Autoregressive Models for Subject-Driven Generation (2025)
- Efficient Pretraining Length Scaling (2025)
- Sparse Attention Remapping with Clustering for Efficient LLM Decoding on PIM (2025)