MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design
Abstract
Quantization has become one of the most effective methods for compressing LLMs. However, existing quantization solutions still suffer from either a non-negligible accuracy drop or system inefficiency. In this paper, we comprehensively analyze how general quantization principles affect the trade-off triangle of accuracy, memory consumption, and system efficiency. We propose MixLLM, which explores the new optimization space of mixed-precision quantization between output features, based on the insight that different output features matter differently to the model. MixLLM identifies high-salience output features from a global view rather than within each single layer, assigning a larger bit-width to the output features that need it most, and thereby achieves good accuracy with low memory consumption. We also identify a sweet-spot quantization configuration through algorithm-system co-design that yields both high accuracy and high system efficiency. To address the system challenge, we design a two-step dequantization that exploits the int8 Tensor Core together with fast data type conversion to significantly reduce dequantization overhead, and a software pipeline that overlaps memory access, dequantization, and the MatMul. Extensive experiments show that with only 10% more bits, the perplexity increase can be reduced from about 0.5 for the state of the art to within 0.2 for Llama 3.1 70B, while MMLU-Pro improves by 0.93 on average over the state of the art across three popular models. In addition to its superior accuracy, MixLLM also achieves state-of-the-art system efficiency.
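The core algorithmic idea, choosing the quantization bit-width per output feature using a salience ranking that is global across all layers rather than per-layer, can be illustrated with a short sketch. The function name `assign_output_feature_bits`, the `high_bit_fraction` budget, and the salience proxy (weight row norm weighted by activation magnitudes) are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal sketch (not the authors' released code): rank every output feature
# (row of each linear layer's weight) by a salience score, pool the scores
# across ALL layers, and give 8-bit precision to the globally top-ranked
# fraction while the rest stay at 4 bits.
import torch

def assign_output_feature_bits(layers, act_norms, high_bit_fraction=0.1):
    """layers: dict name -> weight tensor [out_features, in_features]
       act_norms: dict name -> per-input-channel activation norms [in_features]
       Returns dict name -> bool mask over output features (True = 8-bit)."""
    scores = []  # (salience, layer_name, output_feature_index)
    for name, weight in layers.items():
        # Salience proxy for each output feature: norm of its weight row,
        # weighted by the typical activation magnitude of each input channel.
        sal = (weight * act_norms[name]).norm(dim=1)
        for idx, s in enumerate(sal.tolist()):
            scores.append((s, name, idx))

    # Global ranking across all layers, not a per-layer top-k.
    scores.sort(key=lambda t: t[0], reverse=True)
    budget = int(high_bit_fraction * len(scores))

    masks = {name: torch.zeros(w.shape[0], dtype=torch.bool)
             for name, w in layers.items()}
    for _, name, idx in scores[:budget]:
        masks[name][idx] = True  # this output feature gets the larger bit-width
    return masks
```

Because the budget is spent globally, layers whose outputs are uniformly low-salience may receive almost no 8-bit features, while sensitive layers receive many, which is what lets roughly 10% extra bits recover most of the accuracy gap.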
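The system-side idea, dequantizing 4-bit weights only up to int8 so the MatMul can run on the int8 Tensor Core and applying the floating-point scales after integer accumulation, can likewise be emulated in a few lines. This is a sketch under simplifying assumptions (per-tensor activation scale, per-output-feature weight scale, unpacked int4 values stored in an int8 tensor); the real kernel operates on packed data with group-wise scales and a pipelined CUDA implementation, none of which is shown here.

```python
# Minimal emulation of the two-step dequantization idea (an assumption,
# not MixLLM's kernel).
import torch

def w4a8_matmul(x_int8, x_scale, w_int4, w_scale):
    """x_int8: [M, K] int8 activations, x_scale: scalar float scale
       w_int4: [N, K] int4 values held in an int8 tensor (range [-8, 7])
       w_scale: [N] per-output-feature float scales."""
    # Step 1: widen int4 -> int8 (cheap type conversion, no FP math yet).
    w_int8 = w_int4.to(torch.int8)
    # Step 2: integer MatMul with int32 accumulation, emulating the int8
    # Tensor Core path (a real kernel would issue mma instructions).
    acc = x_int8.to(torch.int32) @ w_int8.to(torch.int32).t()  # [M, N]
    # Floating-point scales are applied only after accumulation.
    return acc.float() * x_scale * w_scale
```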
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- MixPE: Quantization and Hardware Co-design for Efficient LLM Inference (2024)
- BitMoD: Bit-serial Mixture-of-Datatype LLM Acceleration (2024)
- AutoMixQ: Self-Adjusting Quantization for High Performance Memory-Efficient Fine-Tuning (2024)
- DQA: An Efficient Method for Deep Quantization of Deep Neural Network Activations (2024)
- FP=xINT: A Low-Bit Series Expansion Algorithm for Post-Training Quantization (2024)
- ResQ: Mixed-Precision Quantization of Large Language Models with Low-Rank Residuals (2024)
- GAQAT: Gradient-Adaptive Quantization-Aware Training for Domain Generalization (2024)
We are still refactoring the code, covering both the optimized quantization procedure and the efficient linear kernels, and integrating it into the end-to-end serving system. It will be released soon. Thanks.