KeyError when running the code in the README

#1
by fepegar - opened

Hi! Thanks for sharing your models.

FYI I tried to run the snippet in the README on Colab and got the error below. My version of transformers is 4.55.2 (latest on PyPI).

I see the model was added in this interestingly named pull request, so I guess it hasn't been released yet. Things work when I pip-install from git+https://github.com/huggingface/transformers.

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1270             try:
-> 1271                 config_class = CONFIG_MAPPING[config_dict["model_type"]]
   1272             except KeyError:

/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key)
    965         if key not in self._mapping:
--> 966             raise KeyError(key)
    967         value = self._mapping[key]

KeyError: 'dinov3_vit'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/tmp/ipython-input-2082298813.py in <cell line: 0>()
      5 image = load_image(url)
      6 
----> 7 feature_extractor = pipeline(
      8     model="facebook/dinov3-vitb16-pretrain-lvd1689m",
      9     task="image-feature-extraction",

/usr/local/lib/python3.11/dist-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
    907                     model = adapter_config["base_model_name_or_path"]
    908 
--> 909         config = AutoConfig.from_pretrained(
    910             model, _from_pipeline=task, code_revision=code_revision, **hub_kwargs, **model_kwargs
    911         )

/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1271                 config_class = CONFIG_MAPPING[config_dict["model_type"]]
   1272             except KeyError:
-> 1273                 raise ValueError(
   1274                     f"The checkpoint you are trying to load has model type `{config_dict['model_type']}` "
   1275                     "but Transformers does not recognize this architecture. This could be because of an "

ValueError: The checkpoint you are trying to load has model type `dinov3_vit` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
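For anyone curious why the KeyError becomes a ValueError: `AutoConfig.from_pretrained` reads `model_type` from the checkpoint's `config.json` and looks it up in a registry that only contains the architectures shipped with the installed release. A minimal sketch of that dispatch (class names and structure simplified here, not the actual transformers internals):

```python
# Simplified sketch of AutoConfig's model_type dispatch (assumed/simplified
# names, not the real transformers code).
CONFIG_MAPPING = {
    "vit": "ViTConfig",
    "dinov2": "Dinov2Config",
    # "dinov3_vit" is absent in released transformers 4.55.2, hence the KeyError
}

def resolve_config_class(config_dict: dict) -> str:
    """Mimic the model_type lookup seen in the traceback above."""
    model_type = config_dict["model_type"]
    try:
        return CONFIG_MAPPING[model_type]
    except KeyError:
        # The KeyError is caught and re-raised as the ValueError shown above
        raise ValueError(
            f"The checkpoint you are trying to load has model type `{model_type}` "
            "but this version of Transformers does not recognize this architecture."
        )
```

Installing from source updates this registry to include `dinov3_vit`, which is why the git install works while the 4.55.2 release does not.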
AI at Meta org

Please refer to the pinned discussion for this common issue.

patricklabatut changed discussion status to closed
