Data and model collection for MARBLE: https://github.com/a43992899/MARBLE/
Multimodal Art Projection (M-A-P) is an open-source AI research community.
The community members work on a wide spectrum of research topics, including but not limited to pre-training paradigms for foundation models, large-scale data collection and processing, and derived applications in coding, reasoning, and music creativity.
The community is open to researchers keen on any relevant topic. Welcome to join us!
- Discord Channel
- Our Full Paper List
- Email: [email protected]
The development log of our Multimodal Art Projection (m-a-p) model family:
- 🔥 28/01/2025: We release YuE (乐), the most powerful open-source foundation model family for music generation, specifically for transforming lyrics into full songs (lyrics2song), like Suno.ai. See demos.
- 🔥 08/05/2024: We release MAP-Neo, a fully transparent large language model series for scaling-law exploration and post-training alignment, along with its training corpus, Matrix.
- 🔥 11/04/2024: The MuPT paper and demo are out. HF collection.
- 🔥 08/04/2024: Chinese Tiny LLM is out. HF collection.
- 🔥 28/02/2024: We release ChatMusician's demo, code, model, data, and benchmark. 🎉
- 🔥 23/02/2024: We release OpenCodeInterpreter, which beats the GPT-4 code interpreter on HumanEval.
- 23/01/2024: We release CMMMU for better evaluation of Chinese LMMs.
- 13/01/2024: We release a series of Music Pretrained Transformer (MuPT) checkpoints, with sizes up to 1.3B parameters and a context length of 8192. Our models are LLaMA2-based and pre-trained on the world's largest symbolic music dataset (10B tokens in ABC notation format). We currently support the Megatron-LM format and will release Hugging Face checkpoints soon.
- 02/06/2023: We officially release the MERT pre-print paper and training code.
- 17/03/2023: We release two advanced music understanding models, MERT-v1-95M and MERT-v1-330M, trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks.
- 14/03/2023: We retrain the MERT-v0 model on an open-source-only music dataset and release it as MERT-v0-public.
- 29/12/2022: MERT-v0, a music understanding model trained with the MLM paradigm, which performs better on downstream tasks.
- 29/10/2022: music2vec, a pre-trained MIR model trained with the BYOL paradigm.
This is the collection of COIG-P models:
- m-a-p/Infinity-Instruct-3M-0625-Llama3-8B-COIG-P (Text Generation, 8B)
- m-a-p/Qwen2.5-Instruct-7B-COIG-P (Text Generation, 8B)
- m-a-p/Infinity-Instruct-3M-0625-Mistral-7B-COIG-P (Text Generation, 7B)
- m-a-p/Qwen2-Instruct-7B-COIG-P (Text Generation, 8B)
Models (122):
- m-a-p/xcodec
- m-a-p/key_sota_20250618
- m-a-p/MERT-v1-330M (Audio Classification)
- m-a-p/MERT-v1-95M (Audio Classification)
- m-a-p/Infinity-Instruct-3M-0625-Llama3-8B-COIG-P (Text Generation, 8B)
- m-a-p/Qwen2-Instruct-7B-COIG-P (Text Generation, 8B)
- m-a-p/Qwen2.5-Instruct-7B-COIG-P (Text Generation, 8B)
- m-a-p/CRM_llama3 (Text Classification, 8B)
- m-a-p/Infinity-Instruct-3M-0625-Qwen2-7B-COIG-P (Text Generation, 8B)
- m-a-p/Infinity-Instruct-3M-0625-Mistral-7B-COIG-P (Text Generation, 7B)
Datasets (54):
- m-a-p/PIN-100M
- m-a-p/HookTheory
- m-a-p/SciDA
- m-a-p/Chords1217
- m-a-p/GTZAN
- m-a-p/MTT
- m-a-p/EMO
- m-a-p/GS
- m-a-p/ScaleLong
- m-a-p/COIG-Writer