- **Model type:** Multimodal speech+vision+text foundation model
- **Language(s) (NLP):** English
- **License:** CC-BY-4.0
- **Uses frozen components from:** [Moshika](https://huggingface.co/kyutai/moshika-pytorch-bf16) and [PaliGemma2](https://huggingface.co/google/paligemma2-3b-pt-448)
- **Terms of use:** As the released models include frozen weights of the SigLIP image encoder from PaliGemma-2, MoshiVis is subject to the Gemma Terms of Use found at [ai.google.dev/gemma/terms](https://ai.google.dev/gemma/terms)

### Model Sources