---
license: mit
---

# MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning

## Paper Authors

Constantin Eichenberg, Sidney Black, Samuel Weinbach, [Aleph Alpha](https://aleph-alpha.com "Independent AI R&D")

Letitia Parcalabescu, Anette Frank, [Heidelberg University](https://www.cl.uni-heidelberg.de "Computational Linguistics at Heidelberg University")

## Abstract

Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA - a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end using a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state-of-the-art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on 0.2% of the number of samples used to train SimVLM.

Paper on arXiv: https://arxiv.org/abs/2112.05253

## Repository

For the training and inference code, please refer to the [magma repository](https://github.com/Aleph-Alpha/magma) on GitHub.

## Model design

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/671a0238b080a748c29b8fea/TgBQtCGjTtKahRlnDkHZr.jpeg)
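
The figure above and the abstract describe the core idea: visual features are mapped into the language model's embedding space as a prefix of "visual tokens", the pretrained language model itself stays frozen, and small adapter layers wrapped around its blocks (together with the image-prefix projection) are the only trainable weights, optimized with a plain next-token language modeling objective. The PyTorch sketch below illustrates this wiring with toy stand-in modules. It is not the MAGMA implementation (see the repository above for that); all class names, dimensions, and the pooled image-feature input are illustrative assumptions, and the causal attention mask of a real autoregressive decoder is omitted for brevity.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual connection."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))


class AdapterAugmentedBlock(nn.Module):
    """A frozen transformer block followed by a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.block(x))


class ToyAdapterVLModel(nn.Module):
    """Toy stand-in for the MAGMA design: frozen LM + trainable image prefix and adapters."""
    def __init__(self, vocab_size=1000, dim=256, n_layers=4, image_dim=512, prefix_len=4):
        super().__init__()
        self.prefix_len, self.dim = prefix_len, dim

        # Stand-ins for a pretrained autoregressive LM (kept frozen below).
        self.tok_emb = nn.Embedding(vocab_size, dim)
        blocks = [nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
                  for _ in range(n_layers)]
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)

        # Trainable pieces: image-prefix projection and per-block adapters.
        self.image_prefix = nn.Linear(image_dim, prefix_len * dim)
        self.layers = nn.ModuleList([AdapterAugmentedBlock(b, dim) for b in blocks])

        # Freeze the language model weights; only adapters + image prefix receive gradients.
        for p in self.tok_emb.parameters():
            p.requires_grad = False
        for p in self.lm_head.parameters():
            p.requires_grad = False
        for wrapped in self.layers:
            for p in wrapped.block.parameters():
                p.requires_grad = False

    def forward(self, image_features, token_ids):
        # Project pooled image features into a short sequence of "visual tokens"
        # and prepend them to the text embeddings.
        b = token_ids.shape[0]
        prefix = self.image_prefix(image_features).view(b, self.prefix_len, self.dim)
        x = torch.cat([prefix, self.tok_emb(token_ids)], dim=1)
        for layer in self.layers:
            x = layer(x)
        return self.lm_head(x)  # next-token logits over visual + text positions


# Toy forward pass: batch of 2 pooled image features and 8 text tokens each.
model = ToyAdapterVLModel()
logits = model(torch.randn(2, 512), torch.randint(0, 1000, (2, 8)))
print(logits.shape)  # torch.Size([2, 12, 1000]): 4 visual + 8 text positions
```

Because only the adapters and the image prefix are updated, the frozen language model keeps the encyclopedic knowledge and in-context learning abilities acquired during language pretraining, which is the property the abstract highlights.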