Deploy to inference endpoint
Qwen2.5-VL-3B-Instruct
Can you help me deploy this model on a Hugging Face Inference Endpoint?
It's giving me all sorts of errors. I need to extract data from receipts and invoices, and I'm surprised by the app you have hosted, given that it's a 3B model. It works perfectly; I can't ask for more than this.
Model: https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct
or Deploy on runpod serverless
Runpod.io: https://www.runpod.io/console/serverless
Thank you for the prompt response. If you try to deploy it, you will get an error.
Let me try RunPod and see if it works.
RunPod is the fastest and most flexible way to deploy among all the options; otherwise, build your own endpoint.
Simply put: the `Qwen2_5_VLForConditionalGeneration` model only supports the latest version of transformers, which may be a -dev version. If the server machine's libraries are not updated, the Qwen2.5-VL architecture might not be recognized.
Even if I upgrade, downgrade, or change the version, the Space remains stuck with the same configuration errors:
git+https://github.com/huggingface/transformers.git
I recommend deploying it after some time, once providers have updated their versions.
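As a quick sanity check before deploying, you can verify whether the installed transformers build actually exposes the Qwen2.5-VL architecture. This is a minimal sketch; the helper name `qwen_vl_supported` is my own, and it assumes transformers was installed from source as shown above (`pip install git+https://github.com/huggingface/transformers.git`):

```python
def qwen_vl_supported():
    """Return True if the installed transformers build exposes the
    Qwen2.5-VL architecture; older releases will fail this import."""
    try:
        # Class name taken from the error in this thread
        from transformers import Qwen2_5_VLForConditionalGeneration  # noqa: F401
        return True
    except ImportError:
        # transformers missing or too old to know the Qwen2.5-VL architecture
        return False

if __name__ == "__main__":
    print("Qwen2.5-VL supported:", qwen_vl_supported())
```

If this prints `False` on the deployment image, the Space or endpoint will hit the same "architecture not recognized" configuration error, regardless of how the model itself is configured.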
OK, thank you so much for helping.