Using the Llama 3.2 Vision model deployed on Vertex AI Model Garden for text+image input

#86
by PrajwalM - opened

Hi @all
For our use case we have deployed Llama-3.2-11B-Vision-Instruct and we are able to get responses for text-only inputs. However, when we pass an image as a base64-encoded string, the endpoint returns 'non-leading images are not supported'. We also had to resize the images down to 64x64 resolution to avoid hitting the token-limit error.
Is there an effective implementation example for this?
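For reference, here is a minimal sketch of how we are structuring the request. The endpoint URL and model name are placeholders for our deployment; the key assumption (based on the 'non-leading images are not supported' error) is that the serving stack requires the image part to come *before* the text part in the message content:

```python
import base64
import json

# Placeholder -- replace with your actual Vertex AI Model Garden
# endpoint (assuming an OpenAI-compatible chat-completions route).
ENDPOINT_URL = "https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT/locations/REGION/endpoints/ENDPOINT_ID/chat/completions"

def build_payload(image_bytes: bytes, question: str) -> dict:
    """Build an OpenAI-style chat payload with the image placed first.

    The 'non-leading images are not supported' error suggests the
    server only accepts messages where the image precedes the text.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "llama-3.2-11b-vision-instruct",  # placeholder model ID
        "messages": [
            {
                "role": "user",
                "content": [
                    # Image part FIRST (a "leading" image)
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                    # Text part second
                    {"type": "text", "text": question},
                ],
            }
        ],
        "max_tokens": 256,
    }

# Dummy bytes stand in for real JPEG data in this sketch.
payload = build_payload(b"\xff\xd8fake-jpeg-bytes", "Describe this image.")
print(json.dumps(payload, indent=2)[:200])
# The actual call would be e.g.:
#   requests.post(ENDPOINT_URL, json=payload, headers={"Authorization": f"Bearer {token}"})
```

Is this the expected message structure, or is there a different recommended way to send images to this deployment?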

Thanks in advance,
