BAAI / BGE-VL-MLLM-S2
Beijing Academy of Artificial Intelligence
Safetensors · English · llava_next · multimodal-retrieval · embedding-model · custom_code
arXiv: 2412.14475
License: MIT
Community: 3 discussions (0 closed)
Can the task instruction be changed? Is there a tokenizer bug? And what is the difference between the q and c embeddings?
#3 · opened 27 days ago by Labmem009 (usage sketch below)
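For context, a minimal sketch of how the q (query) and c (candidate) sides are typically encoded with this family of checkpoints, following the usage pattern shown on the BGE-VL model cards. The `set_processor`/`data_process` API, the `q_or_c` flag, the `task_instruction` string, and the image paths below are assumptions carried over from that pattern and may differ for this repository.

```python
import torch
from transformers import AutoModel

MODEL_NAME = "BAAI/BGE-VL-MLLM-S2"

# trust_remote_code is required because the repo ships custom modeling code.
model = AutoModel.from_pretrained(MODEL_NAME, trust_remote_code=True)
model.eval()

with torch.no_grad():
    model.set_processor(MODEL_NAME)

    # "q" side: the query (text plus optional image) is prefixed with a task
    # instruction describing the retrieval intent. The instruction is a plain
    # string argument, so it can be swapped for another task description.
    query_inputs = model.data_process(
        text="Make the background dark, as if the photo was taken at night",
        images="./query.png",  # hypothetical local path
        q_or_c="q",
        task_instruction=(
            "Retrieve the target image that best meets the combined criteria "
            "by using both the provided image and the image retrieval instructions: "
        ),
    )

    # "c" side: candidates are encoded without any instruction; only the
    # items to be retrieved go through this branch.
    candidate_inputs = model.data_process(
        images=["./candidate_1.png", "./candidate_2.png"],  # hypothetical paths
        q_or_c="c",
    )

    # Both sides take the last-token hidden state as the embedding, then
    # cosine similarity (normalized dot product) for scoring.
    query_embs = model(**query_inputs, output_hidden_states=True)[:, -1, :]
    candi_embs = model(**candidate_inputs, output_hidden_states=True)[:, -1, :]
    query_embs = torch.nn.functional.normalize(query_embs, dim=-1)
    candi_embs = torch.nn.functional.normalize(candi_embs, dim=-1)
    scores = query_embs @ candi_embs.T
```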
Expanding inputs for image tokens in LLaVa-NeXT should be done in processing. Please add `patch_size` and `vision_feature_select_strategy` to the model's processing config
#2 · opened about 1 month ago by thirdinwinter (workaround sketch below)
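A minimal sketch of the configuration fix this warning asks for, assuming the checkpoint is loaded through the standard transformers LLaVA-NeXT processor classes; the attribute values are read from the model config rather than hard-coded, and the custom code in this repo may handle this differently.

```python
from transformers import AutoConfig, AutoProcessor

MODEL_NAME = "BAAI/BGE-VL-MLLM-S2"

config = AutoConfig.from_pretrained(MODEL_NAME, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(MODEL_NAME, trust_remote_code=True)

# Setting these two attributes lets the processor expand the image placeholder
# tokens itself, which silences the "Expanding inputs for image tokens in
# LLaVa-NeXT should be done in processing" deprecation warning.
processor.patch_size = config.vision_config.patch_size  # e.g. 14 for a ViT-L/14 backbone
processor.vision_feature_select_strategy = config.vision_feature_select_strategy  # e.g. "default"
```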
Unfortunately, there is no Chinese support (original title: 可惜没有中文)
#1 · opened about 1 month ago by limoncc