arxiv:2410.07524

Upcycling Large Language Models into Mixture of Experts

Published on Oct 10, 2024

Abstract

Upcycling pre-trained dense language models into sparse mixture-of-experts (MoE) models is an efficient approach to increase the model capacity of already trained models. However, optimal techniques for upcycling at scale remain unclear. In this work, we conduct an extensive study of upcycling methods and hyperparameters for billion-parameter-scale language models. We propose a novel "virtual group" initialization scheme and weight scaling approach to enable upcycling into fine-grained MoE architectures. Through ablations, we find that upcycling outperforms continued dense model training. In addition, we show that softmax-then-topK expert routing improves over the topK-then-softmax approach, and that higher-granularity MoEs can help improve accuracy. Finally, we upcycled Nemotron-4 15B on 1T tokens and compared it to a continuously trained version of the same model on the same 1T tokens: the continuously trained model achieved 65.3% MMLU, whereas the upcycled model achieved 67.6%. Our results offer insights and best practices to effectively leverage upcycling for building MoE language models.
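
As a rough illustration of the routing comparison in the abstract, below is a minimal PyTorch-style sketch (not taken from the paper's code; the helper `upcycle_dense_mlp` is hypothetical) contrasting softmax-then-topK routing with topK-then-softmax routing, alongside the basic upcycling step of copying the trained dense FFN into every expert and adding a fresh router. The paper's virtual-group initialization and weight scaling for fine-grained experts are refinements not reproduced here.

```python
import copy
import torch
import torch.nn.functional as F


def route_softmax_then_topk(router_logits: torch.Tensor, k: int):
    """Softmax over all experts first, then keep the top-k probabilities.
    The kept weights need not sum to 1, so they retain information about the
    probability mass the router assigned to non-selected experts."""
    probs = F.softmax(router_logits, dim=-1)             # [tokens, num_experts]
    weights, expert_ids = torch.topk(probs, k, dim=-1)   # [tokens, k]
    return weights, expert_ids


def route_topk_then_softmax(router_logits: torch.Tensor, k: int):
    """Pick the top-k logits first, then renormalize over only those experts,
    so the weights always sum to 1 across the k selected experts."""
    top_logits, expert_ids = torch.topk(router_logits, k, dim=-1)
    weights = F.softmax(top_logits, dim=-1)
    return weights, expert_ids


def upcycle_dense_mlp(dense_mlp: torch.nn.Module, hidden_size: int, num_experts: int):
    """Naive upcycling (hypothetical helper, for illustration only): replicate
    the trained dense MLP into every expert and attach a freshly initialized
    router projecting from the hidden size to the number of experts."""
    experts = torch.nn.ModuleList(copy.deepcopy(dense_mlp) for _ in range(num_experts))
    router = torch.nn.Linear(hidden_size, num_experts, bias=False)
    return experts, router
```

In either routing variant, each token is dispatched to the experts in `expert_ids` and their outputs are combined with `weights`; the abstract's finding is that the softmax-then-topK ordering gives better results.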

Community

Paper author

I'm excited to share our latest research on Mixture of Experts! Check it out and feel free to reach out if you have any questions!

Thank you for sharing this great paper and the extensive experimental results. I have a question about Figure 9, which seems to contradict the text stating that "using weight scaling helps achieve better loss": Figure 9 shows the run without weight scaling doing better than the run with it.


Paper author

That's a typo. Thanks for pointing it out!


Collections including this paper 1