gtvracer posted an update 8 days ago
Is HuggingFace having issues with meta-llama/Llama-3.2-3B-Instruct? InferenceClient isn't returning any results.

Aside from the large-scale 500 errors yesterday and the day before, the Serverless Inference API has been experiencing frequent errors (presumably due to the major revamp) in recent months.

Perhaps that error is part of it:
https://discuss.huggingface.co/t/hf-inference-api-last-few-minutes-returns-the-same-404-exception-to-all-models/146646
https://discuss.huggingface.co/t/inference-api-stopped-working/150492
https://github.com/huggingface/hub-docs/issues/1694

Which provider do you use?

BTW, we'll ship a provider="auto" option in the coming days. cc @sbrandeis @Wauplin @celinah

In the meantime, the model is served by several other providers; you can use one of them by, for instance, adding provider="novita" to your code:

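A minimal sketch of what that looks like with huggingface_hub's InferenceClient (assuming a recent release with provider support; the token value is a placeholder):

```python
from huggingface_hub import InferenceClient

# Route requests for the model through a specific provider (here: Novita).
client = InferenceClient(
    provider="novita",
    api_key="hf_xxx",  # your Hugging Face access token (placeholder)
)

completion = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```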
