KeyError and ValueError in AutoModelForCausalLM.from_pretrained()
I got an error when I tried to execute AutoModelForCausalLM.from_pretrained(). I reinstalled transformers from GitHub, but the code still does not work.
Code

import os
from uuid import uuid4
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    low_cpu_mem_usage=True,
    device_map="sequential",
    max_memory=max_memory,
    offload_folder=os.path.join("./tmp/", f"{uuid4()}"),
    offload_state_dict=True,
    torch_dtype=DTYPE,
)
Error Message
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1154         try:
-> 1155             config_class = CONFIG_MAPPING[config_dict["model_type"]]
   1156         except KeyError:
KeyError: 'hyperclovax_vlm'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
/usr/local/lib/python3.11/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1155             config_class = CONFIG_MAPPING[config_dict["model_type"]]
   1156         except KeyError:
-> 1157             raise ValueError(
   1158                 f"The checkpoint you are trying to load has model type {config_dict['model_type']} "
   1159                 "but Transformers does not recognize this architecture. This could be because of an "

ValueError: The checkpoint you are trying to load has model type hyperclovax_vlm but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
Hello. May I ask whether you have registered the model with AutoConfig/AutoModel?
Could you please try running the following code?
AutoConfig.register("hyperclovax_vlm", HCXVisionConfig)
AutoModelForCausalLM.register(HCXVisionConfig, HCXVisionForCausalLM)
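For reference, here is a minimal, self-contained sketch of that registration pattern. The real HCXVisionConfig and HCXVisionForCausalLM come from the checkpoint's own modeling files; the stub classes below are placeholders that only illustrate how the mapping resolves the custom model_type, not the actual implementation.

```python
from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    PretrainedConfig,
    PreTrainedModel,
)

# Stub standing in for the real HCXVisionConfig shipped with the checkpoint.
# The key requirement: model_type must match the "model_type" field in the
# checkpoint's config.json.
class HCXVisionConfig(PretrainedConfig):
    model_type = "hyperclovax_vlm"

# Stub standing in for the real HCXVisionForCausalLM model class.
# config_class must point at the config registered for this model_type.
class HCXVisionForCausalLM(PreTrainedModel):
    config_class = HCXVisionConfig

# Register the mapping so the auto classes can resolve "hyperclovax_vlm"
# instead of raising KeyError -> ValueError as in the traceback above.
AutoConfig.register("hyperclovax_vlm", HCXVisionConfig)
AutoModelForCausalLM.register(HCXVisionConfig, HCXVisionForCausalLM)

# After registration, the auto classes know the architecture:
print(AutoConfig.for_model("hyperclovax_vlm").model_type)  # hyperclovax_vlm
```

If the checkpoint ships its own modeling code on the Hub, passing trust_remote_code=True to from_pretrained is often an alternative to manual registration.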
Thank you.
We have updated the examples to make it easier for users to use the model.
Could you please take a look at the updated code and try again?
Thank you.