GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers (arXiv:2210.17323, published Oct 31, 2022)
QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models (arXiv:2310.16795, published Oct 25, 2023)
Scaling Laws for Sparsely-Connected Foundation Models (arXiv:2309.08520, published Sep 15, 2023)