Gaie committed
Commit 41edc44 (verified) · Parent(s): 72671d8

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -124,7 +124,7 @@ widget:
 Beaver-Vision-11B is an <u>Image-Text-to-Text</u> chat assistant trained based on the [LLaMA-3.2-11B-Vision](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision) (**pretrained version**) using the [Align-Anything-Instruct](https://huggingface.co/datasets/PKU-Alignment/Align-Anything) dataset and [Align-Anything](https://github.com/PKU-Alignment/align-anything) framework.
 
 Beaver-Vision-11B aims to enhance the instruction-following abilities of MLLMs (Multi-modal Large Language Models).
-Compared with [LLaMA-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct), Beaver-Vision-11B uses [Align-Anything-Instruct](https://huggingface.co/datasets/PKU-Alignment/Align-Anything) dataset and post-training alignment method, achieving better performance. More importantly, Beaver-Vision-7B has open-sourced all of its training data, code, and evaluation scripts, providing greater convenience for the community and researchers.
+Compared with [LLaMA-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct), Beaver-Vision-11B uses [Align-Anything-Instruct](https://huggingface.co/datasets/PKU-Alignment/Align-Anything) dataset and post-training alignment method, achieving better performance. More importantly, Beaver-Vision-11B has open-sourced all of its training data, code, and evaluation scripts, providing greater convenience for the community and researchers.
 
 - **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
 - **Model Type:** An auto-regressive multi-modal (Image-Text-to-Text) language model based on the transformer architecture.
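
Since the card describes an Image-Text-to-Text model built on LLaMA-3.2-11B-Vision, it should load the same way as the base model. A minimal sketch, assuming the repo id `PKU-Alignment/Beaver-Vision-11B` and the standard Llama-3.2-Vision (Mllama) classes in `transformers`; the image path and prompt are placeholders, not an official example from the card:

```python
# Minimal usage sketch for an Mllama-based Image-Text-to-Text model.
# Assumptions: repo id "PKU-Alignment/Beaver-Vision-11B"; transformers >= 4.45.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "PKU-Alignment/Beaver-Vision-11B"  # assumed repo id
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Replace with any local image file.
image = Image.open("example.jpg")
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```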