jw2yang committed
Commit 1f82f11 · Parent: fac8844
Files changed (1): README.md (+11 -7)
README.md CHANGED
@@ -44,13 +44,17 @@ pipeline_tag: text-generation
 
 <!-- Provide a longer summary of what this model is. -->
 
-Magma is a multimodal agentic AI model that can generate text based on the input text and image. The model is designed for research purposes and aimed at knowledge-sharing and accelerating research in multimodal AI, in particular the multimodal agentic AI. The main innovation of this model lies on the introduction of two technical innovations: Set-of-Mark and Trace-of-Mark, and the leverage of a large-amount of unlabeled video data to learn the spatial-temporal grounding and planning. Please refer to our paper for more technical details.
+Magma is a multimodal agentic AI model that can generate text from input text and images. The model is designed for research purposes, aimed at knowledge sharing and at accelerating research in multimodal AI, in particular multimodal agentic AI. Its main innovation lies in the introduction of two techniques, **Set-of-Mark** and **Trace-of-Mark**, and in leveraging a **large amount of unlabeled video data** to learn spatial-temporal grounding and planning. Please refer to our paper for more technical details.
 
-The model is developed by Microsoft and is funded by Microsoft Research.
-
-The model is shared by Microsoft Research and is licensed under the MIT License.
-
-The model is developed based on Meta LLama-3 as the LLM.
+## :sparkles: Highlights
+* **Digital and Physical Worlds:** Magma is the first-ever foundation model for multimodal AI agents, designed to handle complex interactions across both virtual and real environments!
+* **Versatile Capabilities:** Magma, as a single model, not only possesses generic image and video understanding ability, but also generates goal-driven visual plans and actions, making it versatile for different agentic tasks!
+* **State-of-the-art Performance:** Magma achieves state-of-the-art performance on various multimodal tasks, including UI navigation and robotic manipulation, as well as generic image and video understanding, in particular spatial understanding and reasoning!
+* **Scalable Pretraining Strategy:** Magma is designed to be **learned scalably from unlabeled videos** in the wild in addition to existing agentic data, giving it strong generalization ability and making it suitable for real-world applications!
+
+NOTE: The model is developed by Microsoft and is funded by Microsoft Research.
+NOTE: The model is shared by Microsoft Research and is licensed under the MIT License.
+NOTE: The model is developed based on Meta Llama-3 as the LLM.
 
 <!-- {{ model_description | default("", true) }}
 
@@ -67,8 +71,8 @@ The model is developed based on Meta LLama-3 as the LLM.
 <!-- Provide the basic links for the model. -->
 
 - **Project Page:** [Project Page](https://microsoft.github.io/Magma/)
-- **Repository:** [Magma Github Repo](https://github.com/microsoft/Magma)
-- **Paper:** [arXiv](https://www.arxiv.org/pdf/2502.13130)
+- **Repository:** [Github Repo](https://github.com/microsoft/Magma)
+- **Paper:** [arXiv Paper](https://www.arxiv.org/pdf/2502.13130)
 
 <!-- - **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}} -->
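Since the card describes a model that turns text-plus-image input into generated text, a minimal usage sketch may help readers of this commit. Everything here is an assumption rather than part of the diff: the Hub checkpoint id (`microsoft/Magma-8B`), the `trust_remote_code=True` loading path, and the processor's standard `transformers` text+image interface; the model's exact prompt format (image placeholder tokens or a chat template) is model-specific, so treat this as illustrative and consult the model card and the GitHub repo for the authoritative example.

```python
# Hypothetical usage sketch -- NOT taken from this commit. Assumes:
#   * a Hub checkpoint id such as "microsoft/Magma-8B" (assumption),
#   * custom modeling code loaded via trust_remote_code=True,
#   * a processor that accepts text and images like standard
#     transformers multimodal processors; the real prompt format
#     is defined in the model card.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Magma-8B"  # assumed id; check the Hub for the actual one

model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("example.png").convert("RGB")  # any local RGB image

# Pack both modalities into model inputs, then generate text.
inputs = processor(
    images=image, text="What is shown in this image?", return_tensors="pt"
).to("cuda")
with torch.inference_mode():
    output_ids = model.generate(**inputs, max_new_tokens=128)

print(processor.decode(output_ids[0], skip_special_tokens=True))
```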