jw2yang committed
Commit 2635739 · Parent(s): 1f82f11
Files changed (1)
  1. README.md +2 -12
README.md CHANGED
@@ -30,7 +30,7 @@ pipeline_tag: text-generation
 
 <sup>*</sup> Project lead <sup>†</sup> First authors <sup>‡</sup> Second authors <sup>▽</sup> Leadership
 
-\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\] &nbsp; \[[Project Page](https://microsoft.github.io/Magma/)\] &nbsp; \[[Hugging Face Model](https://huggingface.co/microsoft/Magma-8B)\] &nbsp;
+\[[arXiv Paper](https://www.arxiv.org/pdf/2502.13130)\] &nbsp; \[[Project Page](https://microsoft.github.io/Magma/)\] &nbsp; \[[Hugging Face Model](https://huggingface.co/microsoft/Magma-8B)\] &nbsp; \[[Github Repo](https://github.com/microsoft/Magma)\]
 
 </div>
 
@@ -46,7 +46,7 @@ pipeline_tag: text-generation
 
 Magma is a multimodal agentic AI model that generates text from input text and images. The model is designed for research purposes, aimed at knowledge sharing and at accelerating research in multimodal AI, in particular multimodal agentic AI. Its main innovation lies in two techniques, **Set-of-Mark** and **Trace-of-Mark**, and in leveraging a **large amount of unlabeled video data** to learn spatial-temporal grounding and planning. Please refer to our paper for more technical details.
 
-## :sparkles: Highlights
+### Highlights
 * **Digital and Physical Worlds:** Magma is the first foundation model for multimodal AI agents, designed to handle complex interactions across both virtual and real environments!
 * **Versatile Capabilities:** Magma is a single model that not only possesses generic image and video understanding ability, but also generates goal-driven visual plans and actions, making it versatile for different agentic tasks!
 * **State-of-the-art Performance:** Magma achieves state-of-the-art performance on various multimodal tasks, including UI navigation, robotics manipulation, and generic image and video understanding, in particular spatial understanding and reasoning!
@@ -66,16 +66,6 @@ NOTE: The model is developed based on Meta LLama-3 as the LLM.
 - **License:** {{ license | default("[More Information Needed]", true)}}
 - **Finetuned from model [optional]:** {{ base_model | default("[More Information Needed]", true)}} -->
 
-### Model Sources
-
-<!-- Provide the basic links for the model. -->
-
-- **Paper:** [Project Page](https://microsoft.github.io/Magma/)
-- **Repository:** [Github Repo](https://github.com/microsoft/Magma)
-- **Paper:** [arXiv Paper](https://www.arxiv.org/pdf/2502.13130)
-
-<!-- - **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}} -->
-
 ## Intended Uses
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
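The README text kept by this change credits two techniques, **Set-of-Mark** and **Trace-of-Mark**, without showing what they look like in practice. As a rough, purely illustrative sketch (the function names `set_of_mark` and `trace_of_mark` and the data shapes are hypothetical, not taken from the paper or the Magma codebase): Set-of-Mark assigns numeric marks to candidate regions so a model can refer to them by index, and Trace-of-Mark strings each mark's per-frame positions into a trajectory used as a future-motion target.

```python
# Toy illustration only: names and data shapes are hypothetical,
# not taken from the Magma codebase.

def set_of_mark(regions):
    """Assign a numeric mark to each candidate region (x, y, w, h).

    A model can then refer to "mark 2" instead of raw coordinates.
    """
    return {i: region for i, region in enumerate(regions, start=1)}

def trace_of_mark(frames):
    """Collect each mark's per-frame (x, y) positions into a trajectory.

    `frames` is a list of {mark_id: (x, y)} dicts, one per video frame;
    the resulting traces serve as future-motion prediction targets.
    """
    traces = {}
    for frame in frames:
        for mark_id, pos in frame.items():
            traces.setdefault(mark_id, []).append(pos)
    return traces

marks = set_of_mark([(10, 10, 30, 30), (80, 40, 20, 20)])
traces = trace_of_mark([
    {1: (10, 10), 2: (80, 40)},
    {1: (12, 11), 2: (80, 40)},
    {1: (15, 13), 2: (80, 40)},
])
# traces[1] is the moving mark's trajectory; traces[2] stays put
```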