
size mismatch for vision_model

#10
by XiaoHangjia - opened

size mismatch for vision_model.post_layernorm.bias: copying a param with shape torch.Size([1152]) from checkpoint, the shape in current model is torch.Size([768]).
You may consider adding ignore_mismatched_sizes=True in the model from_pretrained method.
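
A minimal sketch of that workaround, assuming the checkpoint is loaded with a LLaVA-style class (the model id and class below are placeholders; substitute the ones you actually use):

```python
import torch
from transformers import LlavaForConditionalGeneration

# Placeholder model id; replace with this repo's id or your local path.
model = LlavaForConditionalGeneration.from_pretrained(
    "org/model-name",
    torch_dtype=torch.float16,
    ignore_mismatched_sizes=True,  # skip the strict shape check for vision_model.*
)
```

Note that any parameter whose shape does not match (such as vision_model.post_layernorm.bias here) is re-initialized rather than loaded, so this can hide a mismatch between the checkpoint's vision config and your transformers version rather than fix it.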

Same problem.

Same problem. How can this be solved? Please help.

I tried this method and it works for me.

It works, thank you!
However, there are some warnings:
UserWarning: for vision_model.head.mlp.fc2.bias: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass assign=True to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
Have you encountered this?
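
For what it is worth, the warning itself says the copy into a meta parameter is a no-op, so the affected vision weights may never be materialized. A quick check (a sketch, assuming `model` is the object returned by `from_pretrained` above) is to list any parameters still on the meta device:

```python
# List parameters that were never materialized (still on the meta device).
meta_params = [name for name, p in model.named_parameters() if p.is_meta]
print(meta_params)  # non-empty output means those weights were not actually loaded
```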
