
AI-basic-class's activity

Post
If your Space stops working after restarting, mainly over the last 5 days (https://discuss.huggingface.co/t/my-space-suddenly-went-offline-the-cpu-cannot-restart/151121/22), try some of the following.
1. Add pydantic==2.10.6 to requirements.txt, or upgrade Gradio to the latest version.
2. Upgrade PyTorch to 2.2.0 or later (torch>=2.2.0 for Zero GPU Spaces).
3. Pin Transformers to 4.49.0 or earlier (transformers<=4.49.0 for Spaces using Transformers or Diffusers).
4. Pin huggingface_hub to an older version (huggingface_hub==0.25.2) if an error like "cached_download is not available" occurs or inference does not work properly.
5. Specifying WORKDIR in a Dockerfile may cause the application to fail to start with error 137 (Docker Spaces, https://discuss.huggingface.co/t/error-code-137-cache-error/152177).

About pydantic==2.10.6:
https://discuss.huggingface.co/t/error-no-api-found/146226
https://discuss.huggingface.co/t/internal-server-error-bool-not-iterable/149494

Edit:
Zero GPU Spaces have been upgraded from A100 to H200. This is likely why older versions of PyTorch are no longer supported; in fact, an error message to that effect was displayed.
zero-gpu-explorers/README#163
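Taken together, the version pins above could look like the following requirements.txt sketch. The versions are the ones suggested in this post; which lines you actually need depends on which error your Space is hitting, so apply them selectively rather than all at once.

```
# requirements.txt (sketch; apply only the lines relevant to your error)
pydantic==2.10.6          # "No API found" / "bool not iterable" errors
torch>=2.2.0              # needed after the Zero GPU A100 -> H200 upgrade
transformers<=4.49.0      # for Spaces using Transformers or Diffusers
huggingface_hub==0.25.2   # if cached_download is missing or inference misbehaves
```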
Post
I used up my Zero GPU quota yesterday (about 12 hours ago). At the time, I got a message saying "Retry at 13:45 (approx.)", but now it has changed to "Retry at 03:22".
Anyway, everyone, let's be careful not to use up our Quota...
Related: https://huggingface.co/posts/Keltezaa/754755723533287#67e6ed5e3394f1ed9ca41dbd

Alanturner2 updated a Space 5 months ago
Alanturner2 updated a model 5 months ago
Alanturner2 published a model 5 months ago
Alanturner2 updated a Space 5 months ago
Alanturner2 published a Space 5 months ago
KellanBrooks updated a model 6 months ago
Post
@victor
@not-lain
There has been a sudden, unusual outbreak of spam posts on the HF Forum that seem to be aimed at relaying online videos and commenting on them, and for some reason they span multiple languages. I've flagged them too, but I'm not sure the staff will be able to keep up with manual countermeasures going forward.
Post
@victor
Sorry for the repetitiveness.
I'm not sure if Posts is the right place to report such an error, but it seems to be a server error unrelated to the Zero GPU Space error the other day, so I don't know where else to report it.
Since this morning, I have been getting a strange error when running inference from a Gradio 3.x Space.
Yntec ( @Yntec ) discovered it, but he does not have a Pro subscription, so I am reporting it on his behalf.
The error message is below. Note that 1girl and other common prompts will return cached output, so experiment with unusual prompts.
Thank you in advance.
John6666/blitz_diffusion_error
John6666/GPU-stresser-t2i-error
ValueError: Could not complete request to HuggingFace API, Status Code: 500, Error: unknown error, Warnings: ['CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF', 'There was an inference error: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 14.75 GiB total capacity; 1.90 GiB already allocated; 3.06 MiB free; 1.95 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF']
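The error message itself suggests one mitigation: setting max_split_size_mb in PYTORCH_CUDA_ALLOC_CONF to reduce allocator fragmentation. A minimal sketch follows; the value 128 is an assumption to tune for your workload, and the variable must be set before PyTorch makes its first CUDA allocation.

```python
import os

# Must be set before the first CUDA allocation (i.e. before importing/using torch).
# max_split_size_mb caps the size of cached blocks the allocator will split,
# which can help when small allocations (like the 30 MiB one above) fail due
# to fragmentation rather than genuinely exhausted memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

On Spaces, the same effect can be had by declaring the variable in the Space's settings instead of in code, which guarantees it is set before any import runs.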
Post
@victor
Excuse me.
I would like to report the following bug, or possibly a new specification, that is probably the cause of the fatal hangs occurring in Zero GPU Spaces throughout HF.
Thanks.
zero-gpu-explorers/README#104