When I download the checkpoint locally, it cannot run

#9
by GrandeSheng - opened

This is my code:

```python
import torch
from PIL import Image
from transformers.utils.import_utils import is_flash_attn_2_available

from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor

model = ColQwen2_5.from_pretrained(
    pretrained_model_name_or_path="models/colqwen2_5",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",  # or "mps" if on Apple Silicon
    attn_implementation="flash_attention_2" if is_flash_attn_2_available() else None,
).eval()
processor = ColQwen2_5_Processor.from_pretrained("models/colqwen2_5")
```

The program gives this error:

```
    raise OSError(
OSError: We couldn't connect to 'https://huggingface.co' to load the files, and couldn't find them in the cached files.
Check your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
```

Vidore org

Your path looks incorrect; try `pretrained_model_name_or_path="vidore/colqwen2.5-v0.2"`.

But I want to load the model that I downloaded to my local server; the path you provided downloads it from the Hub online.

Vidore org
edited Aug 6

Note that if you're trying to run local models, you should download both vidore/colqwen2.5-v0.2 and vidore/colqwen2.5-base, since the trained model is a LoRA adaptation of the latter, and change `"base_model_name_or_path": "vidore/colqwen2.5-base"` in the adapter_config.json file of colqwen2.5-v0.2 to point to the location of the base model.
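For example, a minimal sketch of that edit in Python, assuming the two repos were downloaded to the hypothetical local folders `models/colqwen2.5-v0.2` and `models/colqwen2.5-base`:

```python
import json
from pathlib import Path

# Hypothetical local paths; adjust to wherever the two repos were downloaded.
adapter_dir = Path("models/colqwen2.5-v0.2")
base_dir = Path("models/colqwen2.5-base")

# Point the adapter's base_model_name_or_path at the local base model.
config_path = adapter_dir / "adapter_config.json"
config = json.loads(config_path.read_text())
config["base_model_name_or_path"] = str(base_dir)
config_path.write_text(json.dumps(config, indent=2))
```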

> Note that if you're trying to run local models you should download both vidore/colqwen2.5-v0.2 and vidore/colqwen2.5-base, since the trained model is a LoRA adaptation of the latter, and change `"base_model_name_or_path"` in the adapter_config.json file of colqwen2.5-v0.2 to point to the location of the base model.

So I should provide two args: 1. `base_model_name_or_path="mymodels/colqwen2.5-base"`, 2. `pretrained_model_name_or_path="mymodels/colqwen2.5-v0.2"`. Is this right?

Vidore org

Should work, yes. But kwargs can be handled quite weirdly on Hugging Face; if that doesn't work, just update the adapter_config.json file.
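For reference, a minimal end-to-end sketch of the offline setup, assuming the hypothetical local folder `mymodels/` and that adapter_config.json has been pointed at the local base model as described above:

```python
import torch
from huggingface_hub import snapshot_download

from colpali_engine.models import ColQwen2_5, ColQwen2_5_Processor

# One-time download of both repos to local folders (run while online).
snapshot_download("vidore/colqwen2.5-v0.2", local_dir="mymodels/colqwen2.5-v0.2")
snapshot_download("vidore/colqwen2.5-base", local_dir="mymodels/colqwen2.5-base")

# After editing adapter_config.json so "base_model_name_or_path" points to
# mymodels/colqwen2.5-base, loading works fully from local files.
model = ColQwen2_5.from_pretrained(
    "mymodels/colqwen2.5-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
).eval()
processor = ColQwen2_5_Processor.from_pretrained("mymodels/colqwen2.5-v0.2")
```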

QuentinJG changed discussion status to closed
