is there a difference between llava-hf/llava-1.5-7b-hf and liuhaotian/llava-v1.5-7b ???

#28
by hirasoooo - opened

is there a structural difference between llava-hf/llava-1.5-7b-hf and liuhaotian/llava-v1.5-7b ???
if I train llava-hf/llava-1.5-7b-hf with LoRA using the PEFT library, can I directly apply the resulting LoRA parameters to liuhaotian/llava-v1.5-7b??

hirasoooo changed discussion title from is there a performance difference between llava-hf/llava-1.5-7b-hf and liuhaotian/llava-v1.5-7b ??? to is there a difference between llava-hf/llava-1.5-7b-hf and liuhaotian/llava-v1.5-7b ???

I do observe that the processors are different. Did you find a way to transfer the LoRA parameters directly?

I want to know that too. Have you figured it out?

Llava Hugging Face org

Hi,

Both implementations are equivalent, but the image processing is not at the moment, since the original repo pads the image to a square before applying CLIPImageProcessor.

See here on how to obtain equivalent results: https://github.com/huggingface/transformers/issues/33175#issuecomment-2496001229
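
To illustrate the difference, here is a minimal sketch of that pad-to-square step, modeled on the `expand2square` helper from the original liuhaotian/LLaVA codebase. The image path is a placeholder, and the vision-tower checkpoint shown is an assumption based on what LLaVA-1.5 uses; see the linked issue for the full details:

```python
from PIL import Image
from transformers import CLIPImageProcessor

def expand2square(pil_img, background_color):
    # Pad the shorter side so the image becomes square,
    # centering the original content (mirrors the original repo's helper).
    width, height = pil_img.size
    if width == height:
        return pil_img
    if width > height:
        result = Image.new(pil_img.mode, (width, width), background_color)
        result.paste(pil_img, (0, (width - height) // 2))
    else:
        result = Image.new(pil_img.mode, (height, height), background_color)
        result.paste(pil_img, ((height - width) // 2, 0))
    return result

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14-336")
image = Image.open("example.jpg")  # placeholder path
# The original repo uses the processor's per-channel mean as the padding color.
padded = expand2square(image, tuple(int(x * 255) for x in processor.image_mean))
pixel_values = processor(padded, return_tensors="pt").pixel_values
```

Applying the HF processor to the pre-padded image should reproduce the original repo's pixel values; without the padding step, the two pipelines crop/resize differently and produce different inputs.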
