GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance
Abstract
GuidedQuant integrates gradient information into the quantization objective to improve quantized large language models while maintaining cross-weight dependencies, enhancing performance across various quantization methods.
Post-training quantization is a key technique for reducing the memory and inference latency of large language models by quantizing weights and activations without requiring retraining. However, existing methods either (1) fail to account for the varying importance of hidden features to the end loss or, (2) when incorporating the end loss, neglect the critical interactions between model weights. To address these limitations, we propose GuidedQuant, a novel quantization approach that integrates gradient information from the end loss into the quantization objective while preserving cross-weight dependencies within output channels. GuidedQuant consistently boosts the performance of state-of-the-art quantization methods across weight-only scalar, weight-only vector, and weight-and-activation quantization. Additionally, we introduce a novel non-uniform scalar quantization algorithm, which is guaranteed to monotonically decrease the quantization objective value and outperforms existing methods in this category. We release the code at https://github.com/snu-mllab/GuidedQuant.
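To make the objective described above concrete, the snippet below is a minimal, illustrative sketch, not the official implementation: it weights each output channel's layer-wise quantization error by the end-loss gradients at that channel's output, while keeping a full (input-dim × input-dim) matrix per channel so that cross-weight dependencies within a channel are preserved. The exact weighting, tensor shapes, function names, and the toy rounding step are all assumptions made for this example; see the released code for the actual method.

```python
# Hedged sketch of a gradient-weighted, per-output-channel quantization
# objective of the kind the abstract describes. Shapes and names are
# illustrative assumptions, not GuidedQuant's API.
import torch

def per_channel_hessians(X, G):
    """X: (n, d) calibration inputs to a linear layer.
    G: (n, k) end-loss gradients w.r.t. the layer's outputs.
    Returns H: (k, d, d) with H[i] = (1/n) * sum_n G[n, i]^2 * X[n] X[n]^T."""
    Xw = G.T.unsqueeze(-1) * X.unsqueeze(0)          # (k, n, d): grad-scaled inputs
    return torch.einsum("knd,kne->kde", Xw, Xw) / X.shape[0]

def guided_objective(W, W_q, H):
    """Quadratic proxy for the end-loss increase: sum_i dW_i^T H_i dW_i."""
    dW = W_q - W                                     # (k, d) weight perturbation
    return torch.einsum("kd,kde,ke->", dW, H, dW)

if __name__ == "__main__":
    n, d, k = 256, 64, 32                            # tokens, in-features, out-features
    X = torch.randn(n, d)                            # calibration activations
    G = torch.randn(n, k)                            # end-loss gradients at the output
    W = torch.randn(k, d)
    W_q = torch.round(W * 4) / 4                     # toy rounding as a stand-in quantizer
    H = per_channel_hessians(X, G)
    print("guided objective:", guided_objective(W, W_q, H).item())
```

In this sketch, a quantizer that only decreases `guided_objective` would be prioritizing the weights whose errors the end loss actually cares about, rather than treating all layer outputs as equally important.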
Community
Hello everyone, I am the first author of the paper, and I'm excited to share our latest work with the Hugging Face community, especially as a long-time supporter of open-source research.
Project page: https://jusjinuk.me/blog/guidedquant/
Code: https://github.com/snu-mllab/GuidedQuant
Hugging Face Collection: https://huggingface.co/collections/jusjinuk/instruction-tuned-models-guidedquant-68334269c44cd3eb21f7bd61
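If you want to try one of the released checkpoints, the snippet below is a minimal download sketch: it only fetches a repository from the collection with `huggingface_hub`. The model id is a placeholder to be replaced with a real entry from the collection, and actually running the quantized weights may require the loaders and kernels shipped in the GuidedQuant GitHub repository.

```python
# Hedged sketch: download a quantized checkpoint from the collection above.
from huggingface_hub import snapshot_download

# Placeholder repo id -- pick an actual model from the collection page.
repo_id = "jusjinuk/<model-name-from-the-collection>"

local_dir = snapshot_download(repo_id=repo_id)
print("checkpoint files downloaded to:", local_dir)
```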
We hope you find the work interesting and insightful. Looking forward to your feedback!
Thank you!