Model Card for BLIP-2: Bootstrapping Language-Image Pre-training
BLIP-2 is a unified vision-language model for tasks such as image captioning and visual question answering. It employs a pre-training strategy that leverages frozen pre-trained image encoders and frozen large language models (LLMs) to efficiently bridge the modality gap between vision and language.
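As a quick illustration of image captioning with BLIP-2, here is a minimal sketch using the BLIP-2 classes from Hugging Face `transformers`. The `Salesforce/blip2-opt-2.7b` checkpoint and the COCO sample image URL are illustrative assumptions, not necessarily the exact weights or data associated with this repository.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Assumed checkpoint for demonstration; swap in the weights this card describes.
model_id = "Salesforce/blip2-opt-2.7b"

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id, torch_dtype=dtype).to(device)

# Any RGB image works; this COCO validation image is just an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# With no text prompt, the model generates an unconditional caption for the image.
inputs = processor(images=image, return_tensors="pt").to(device, dtype)
generated_ids = model.generate(**inputs, max_new_tokens=30)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```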
Model Details
Model Description
BLIP-2 (Bootstrapping Language-Image Pre-training) introduces a lightweight Querying Transformer (Q-Former) that connects a frozen image encoder with a frozen LLM. This architecture enables effective vision-language understanding and generation without the need for end-to-end training of large-scale models. The model is capable of zero-shot image-to-text generation and can follow natural language instructions.
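To illustrate the zero-shot, instruction-style usage described above, the sketch below prompts the model with a natural-language question about an image. The checkpoint name, sample image, and question are assumptions chosen for demonstration only.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_id = "Salesforce/blip2-opt-2.7b"  # assumed checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = Blip2Processor.from_pretrained(model_id)
model = Blip2ForConditionalGeneration.from_pretrained(model_id, torch_dtype=dtype).to(device)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Zero-shot VQA: the Q-Former's query embeddings are fed to the frozen LLM
# together with the text prompt, so the question is answered without
# task-specific fine-tuning.
prompt = "Question: how many cats are in the picture? Answer:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, dtype)

generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```

The same prompted pattern also covers conditional captioning, e.g. passing a prefix such as "a photo of" as the text input.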
- Developed by: Salesforce AI Research
- Funded by: Salesforce
- Shared by: Official BLIP-2 repository
- Model type: Vision-language model
- Language(s): English
- Finetuned from model: BLIP-2 base, pre-trained on the COCO dataset
Model Sources
- Repository: https://github.com/salesforce/LAVIS
- Paper: BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models (https://arxiv.org/abs/2301.12597)