arxiv:2505.07004

GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance

Published on May 11, 2025

Abstract

AI-generated summary: GuidedQuant integrates gradient information into the quantization objective to improve quantized large language models while maintaining cross-weight dependencies, enhancing performance across various quantization methods.

Post-training quantization is a key technique for reducing the memory and inference latency of large language models by quantizing weights and activations without requiring retraining. However, existing methods either (1) fail to account for the varying importance of hidden features to the end loss or, when incorporating end loss, (2) neglect the critical interactions between model weights. To address these limitations, we propose GuidedQuant, a novel quantization approach that integrates gradient information from the end loss into the quantization objective while preserving cross-weight dependencies within output channels. GuidedQuant consistently boosts the performance of state-of-the-art quantization methods across weight-only scalar, weight-only vector, and weight-and-activation quantization. Additionally, we introduce a novel non-uniform scalar quantization algorithm, which is guaranteed to monotonically decrease the quantization objective value, and outperforms existing methods in this category. We release the code at https://github.com/snu-mllab/GuidedQuant.
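For intuition, below is a minimal, hypothetical PyTorch sketch of how a gradient-weighted, per-output-channel quadratic quantization objective of this general kind could be assembled from calibration activations and end-loss gradients. The function names, shapes, and scaling are illustrative assumptions, not the paper's released implementation; refer to the linked repository for the actual method.

```python
import torch

def guided_layer_hessians(X: torch.Tensor, G: torch.Tensor) -> torch.Tensor:
    """Illustrative sketch (not the paper's code): build one quadratic proxy per
    output channel by weighting input outer products with squared end-loss
    gradients observed at the layer output.

    X: (N, d_in)  calibration inputs to a linear layer
    G: (N, d_out) gradients of the end loss w.r.t. the layer's outputs
    Returns a (d_out, d_in, d_in) tensor of per-output-channel matrices.
    """
    N, d_in = X.shape
    d_out = G.shape[1]
    H = X.new_zeros(d_out, d_in, d_in)
    for i in range(d_out):
        # Weight each sample's x x^T by the squared end-loss gradient flowing
        # into output channel i; keeping the full matrix (not just its diagonal)
        # retains cross-weight dependencies within the output channel.
        w = (G[:, i] ** 2).unsqueeze(1)      # (N, 1)
        H[i] = (X * w).T @ X / N
    return H

def guided_objective(W: torch.Tensor, W_q: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
    """Sum over output channels of (w_i - w_q_i)^T H_i (w_i - w_q_i)."""
    diff = W - W_q                           # (d_out, d_in)
    return torch.einsum("oi,oij,oj->o", diff, H, diff).sum()

# Toy usage with random calibration data and a naive round-to-nearest baseline.
torch.manual_seed(0)
X = torch.randn(64, 16)        # 64 calibration samples, 16 input features
G = torch.randn(64, 8)         # end-loss gradients for 8 output channels
W = torch.randn(8, 16)
W_q = torch.round(W * 4) / 4   # coarse round-to-nearest grid, for illustration
H = guided_layer_hessians(X, G)
print(guided_objective(W, W_q, H).item())
```

A quantizer guided by such an objective would prefer reconstructions that preserve the outputs the end loss is most sensitive to, rather than treating all hidden features as equally important.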

Community

Paper author

Hello everyone, I am the first author of the paper, and I'm excited to share our latest work with the Hugging Face community, especially as a long-time supporter of open-source research.

We hope you find the work interesting and insightful. Looking forward to your feedback!

Thank you!

Models citing this paper 23

Datasets citing this paper 0

Spaces citing this paper 0

Collections including this paper 1