Out of memory during inference

#1 opened by fediazgon

Hi! I was looking for a model that I could plug into my StableDiffusionImg2Img pipeline for an E2E test, and I found a suggestion in this discussion: https://discuss.huggingface.co/t/smaller-pretrained-models-for-stable-diffusion/23574.
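My setup looks roughly like this (the model id and image path are placeholders for whatever tiny test checkpoint the linked discussion points to):

```python
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Placeholder model id; substitute the tiny test checkpoint from the linked discussion.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained("hf-internal-testing/tiny-stable-diffusion-pipe")
pipe = pipe.to("cuda")

init_image = Image.open("input.png").convert("RGB")  # placeholder input image
prompt = "a test prompt"
```

However, when I run `images = pipe(prompt=prompt, image=init_image).images`, I get an OOM error: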

CUDA out of memory. Tried to allocate 18.00 GiB (GPU 0; 14.76 GiB total capacity; 78.61 MiB already allocated; 13.42 GiB free; 98.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I've also tried on CPU with the same result. Is there anything I can do to solve this issue?
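For completeness, the allocator hint from the error message would be set like this (a sketch; the 128 MiB split size is an arbitrary example value), though it presumably can't help here, since the single 18 GiB allocation already exceeds the GPU's 14.76 GiB total capacity:

```python
import os

# Allocator hint from the error message: cap the split size to reduce
# fragmentation. 128 MiB is an arbitrary example value; the variable must
# be set before torch initializes CUDA.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import torch only after setting the env var
```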

Hugging Face Internal Testing Organization (edited Jan 13, 2023)

Hi,

This model is part of "hf-internal-testing", so it's a randomly initialized model that will output random (garbage) results. It's not meant to be used.

It's recommended to use the official Stable Diffusion models, or any other diffusion model that doesn't have "test" in its name.

That was exactly the goal of what I was trying to do. I don't mind garbage output as long as loading and inference are fast.

It seems that resizing the image to a very small size helped.
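In case it helps anyone else, the fix looks roughly like this, reusing the `pipe` and `prompt` from the snippet above (the 64x64 target size is an arbitrary example):

```python
from PIL import Image

init_image = Image.open("input.png").convert("RGB")  # placeholder input image
# Shrink the init image so the tensors the pipeline processes stay tiny;
# 64x64 is an arbitrary example that keeps memory use minimal.
init_image = init_image.resize((64, 64))

images = pipe(prompt=prompt, image=init_image).images
```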

fediazgon changed discussion status to closed
