---
license: apache-2.0
language:
  - en
base_model:
  - kyutai/moshika-pytorch-bf16
  - mistralai/Pixtral-12B-2409
  - mistral-community/pixtral-12b
---

Model Card for Moshika Vision

Model Details

Model Description

MoshiVis is a perceptually augmented version of Moshi, giving it the ability to freely discuss images whilst maintaining its natural conversation style and low latency. To achieve this, Moshi has been extended with a visual backbone and a cross-attention mechanism to infuse the visual information into the language model.

  • Developed by: Kyutai
  • Model type: Multimodal speech+vision+text foundation model
  • Language(s) (NLP): English
  • License: Apache License 2.0
  • Finetuned from model: Moshika and Pixtral

Uses

Direct Use

Similar to Moshi itself, MoshiVis can be used as a conversational agent for casual conversations, basic facts and advice (e.g. recipes, trivia), roleplay, etc. In addition, MoshiVis is able to recognize and discuss images in a natural way, whilst still allowing for low-latency interactions.

Out-of-Scope Use

The model is not intended to be used to impersonate other people or for any other malicious purpose. This model is for research only, and we do not recommend using it to provide advice or to perform any professional duty.

Bias, Risks, and Limitations

MoshiVis was designed to perceptually augment the original Moshi model with vision capabilities and is expected to inherit similar biases and limitations; see also Moshika. Our analysis of how much MoshiVis diverges from the original model is still ongoing.

How to Get Started with the Model

See the project README for instructions on getting started.
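
As a minimal illustration only (not taken from the official instructions), the checkpoint can be fetched from the Hugging Face Hub before following the README. The repository id below is an assumption and should be checked against the project README.

```python
# Hypothetical example: downloading the MoshiVis weights from the Hugging Face Hub.
# The repository id is an assumption; confirm it against the project README.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("kyutai/moshika-vis-pytorch-bf16")
print(f"Checkpoint downloaded to {local_dir}")
```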

Training Details

Model Architecture and Objective

Our goal was to design an efficient and effective adaptation mechanism that allows Moshi to discuss images whilst maintaining its previous conversational capabilities. To achieve this, we train a cross-attention mechanism to insert image information from a pretrained and frozen vision backbone into the underlying language model, which is also kept frozen. An additional gating mechanism ensures that the insertion of visual information does not impact the interaction with Moshi outside of discussions of images, allowing for a seamless back and forth between general and image-specific conversations.
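
To make the mechanism concrete, here is a minimal, illustrative PyTorch sketch of such a gated cross-attention adapter. It is not the MoshiVis implementation; the module structure, dimensions, and tanh gating are assumptions used only to show the idea of a near-zero-initialised gate injecting features from a frozen vision backbone into a frozen language model.

```python
# Minimal sketch of a gated cross-attention adapter (illustrative only; names,
# dimensions and the gating form are assumptions, not the MoshiVis code).
import torch
import torch.nn as nn


class GatedCrossAttentionAdapter(nn.Module):
    """Injects frozen image features into a frozen LM via cross-attention.

    Only this adapter is trained; the gate starts at zero so the model
    initially behaves exactly like the original speech/text model.
    """

    def __init__(self, d_model: int, d_visual: int, n_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=d_model,
            kdim=d_visual,
            vdim=d_visual,
            num_heads=n_heads,
            batch_first=True,
        )
        # Learned scalar gate, tanh-squashed and initialised at 0
        # so that visual information has no influence at the start of training.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden: torch.Tensor, image_feats: torch.Tensor) -> torch.Tensor:
        # hidden:      (batch, seq_len, d_model)   activations of the frozen LM
        # image_feats: (batch, n_tokens, d_visual) tokens from the frozen vision backbone
        attended, _ = self.cross_attn(self.norm(hidden), image_feats, image_feats)
        return hidden + torch.tanh(self.gate) * attended


# Tiny smoke test with random tensors.
if __name__ == "__main__":
    adapter = GatedCrossAttentionAdapter(d_model=1024, d_visual=768)
    h = torch.randn(2, 16, 1024)
    v = torch.randn(2, 49, 768)
    print(adapter(h, v).shape)  # torch.Size([2, 16, 1024])
```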

Training Procedure

Stay tuned for our technical report, in which we will describe the training procedure in detail!

Training Data

For information on the training data used for the base models, see Pixtral and Moshi respectively. To train the cross-attention and gating mechanism that MoshiVis uses for processing images, we rely on a collection of publicly available datasets.

We will share additional details soon, stay tuned!

Compute Infrastructure

MoshiVis was designed as a relatively low-cost adaptation of Moshi and was trained on a single DGX node with 8 H100 GPUs provided by Scaleway.

Model Card Authors

Amélie Royer, Moritz Böhle