---
license: apache-2.0
language:
- en
base_model:
- kyutai/moshika-pytorch-bf16
- mistralai/Pixtral-12B-2409
- mistral-community/pixtral-12b
---

# Model Card for Moshika Vision

## Model Details

### Model Description

MoshiVis is a perceptually augmented version of Moshi, giving it the ability to freely discuss images whilst maintaining its natural conversation style and low latency. To achieve this, Moshi has been extended with a visual backbone and a cross-attention mechanism that infuses the visual information into the language model.

- **Developed by:** Kyutai
- **Model type:** Multimodal speech+vision+text foundation model
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** [Moshika](https://huggingface.co/kyutai/moshika-pytorch-bf16) and [Pixtral](https://huggingface.co/mistral-community/pixtral-12b)

### Model Sources

- **Repository:** [GitHub kyutai-labs/moshivis](https://github.com/kyutai-labs/moshivis)
- **Demo:** [moshi.chat](https://moshi.chat/)

## Uses

### Direct Use

Similar to Moshi itself, MoshiVis can be used as a conversational agent for casual conversations, basic facts and advice (e.g. recipes, trivia), roleplay, etc. In addition, MoshiVis is able to recognize and discuss images in a natural way, whilst still allowing for low-latency interactions.

### Out-of-Scope Use

The model is not intended to be used to impersonate other people or for any malicious purpose of any kind. This model is for research only; we do not recommend using it to provide advice or to perform any professional duty.

## Bias, Risks, and Limitations

MoshiVis has been designed to perceptually augment the original Moshi model with vision capabilities and is expected to inherit similar biases and limitations; see also [Moshika](https://huggingface.co/kyutai/moshika-pytorch-bf16). Our analysis of how much MoshiVis diverges from the original model is still ongoing.

## How to Get Started with the Model

See the [README file](https://github.com/kyutai-labs/moshivis) for getting started.

## Training Details

### Model Architecture and Objective

Our goal was to design an efficient and effective adaptation mechanism that allows Moshi to discuss images whilst maintaining its previous conversational capabilities. To achieve this, we train a cross-attention mechanism that inserts image information from a pretrained and frozen vision backbone into the underlying language model, which is also kept frozen. An additional gating mechanism ensures that the insertion of visual information does not impact the interaction with Moshi outside of discussions of images, allowing for a seamless back and forth between general and image-specific conversations.
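For intuition, below is a minimal sketch of such a gated cross-attention adapter in PyTorch. It is purely illustrative and not the actual MoshiVis implementation: the module name, dimensions, and the scalar `tanh` gate are assumptions of the sketch (the gating used in MoshiVis may be more elaborate), but it captures the key idea that only the adapter is trained and that, with the gate initialised at zero, the frozen language model's behaviour is unchanged until visual information is injected.

```python
import torch
import torch.nn as nn


class GatedCrossAttentionAdapter(nn.Module):
    """Illustrative gated cross-attention block (not the actual MoshiVis code).

    The (frozen) language model's hidden states attend to features produced by a
    frozen vision backbone; a learned gate, initialised at zero, makes the block
    a no-op at the start of training so the base model's behaviour is preserved.
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Scalar gate: tanh(0) = 0, so the adapter initially passes `x` through unchanged.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, image_tokens: torch.Tensor) -> torch.Tensor:
        # x:            (batch, seq_len, dim)  hidden states of the frozen language model
        # image_tokens: (batch, n_img, dim)    projected features from the frozen vision backbone
        attn_out, _ = self.cross_attn(self.norm(x), image_tokens, image_tokens)
        return x + torch.tanh(self.gate) * attn_out


# Only the adapter parameters would be optimised; the vision backbone and language model stay frozen.
adapter = GatedCrossAttentionAdapter(dim=1024)
hidden = torch.randn(1, 16, 1024)        # dummy language-model hidden states
img_feats = torch.randn(1, 256, 1024)    # dummy projected image features
out = adapter(hidden, img_feats)         # same shape as `hidden`
```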
### Training Procedure

Stay tuned for our technical report, in which we will describe the training procedure in detail!

### Training Data

For information on the training data used for the base models, see [Pixtral](https://mistral.ai/news/pixtral-12b/) and [Moshi](https://huggingface.co/kyutai/moshika-pytorch-bf16) respectively.

To train the cross-attention and gating mechanism that MoshiVis uses for processing images, we rely on a collection of publicly available datasets:

- [Pixelprose](https://arxiv.org/abs/2406.10328)
- [DOCCI](https://arxiv.org/abs/2404.19753)
- [TallyQA](https://arxiv.org/abs/1810.12440)
- [OCRVQA](https://ocr-vqa.github.io/)
- [RenderedText](https://huggingface.co/datasets/wendlerc/RenderedText)
- [DocVQA](https://arxiv.org/abs/2007.00398)
- [ChartQA](https://aclanthology.org/2022.findings-acl.177/)

We will share additional details soon, stay tuned!

### Compute Infrastructure

MoshiVis was designed as a relatively low-cost adaptation of Moshi and was trained on a single DGX node with 8 H100 GPUs provided by Scaleway.

## Model Card Authors

Amélie Royer, Moritz Böhle