
Will not work with Nvidia Geforce 1660 Super

#23
by GreatBizarro - opened

Assuming it must be a memory problem, as this card only has 6 GB of memory. It runs at a reduced size but runs out of memory at 768. At the reduced size I get an output that is just a scramble of colored blobs.


Hey, I have the exact same graphics card. It will work fine, but you need to give it launch arguments.

What you need are:
--precision full
--no-half
--no-half-vae
--medvram

Don't know which platform you're using, but at least it worked alright with these on Automatic1111's web UI version.
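For reference, a sketch of where those flags might go on a stock Automatic1111 install on Windows (the `webui-user.bat` file and `COMMANDLINE_ARGS` variable are the standard ones there; adjust for `webui-user.sh` on Linux):

```shell
@REM webui-user.bat -- pass the flags via COMMANDLINE_ARGS
@REM --precision full / --no-half: run everything in fp32 (avoids fp16 issues on 16xx cards)
@REM --no-half-vae: keep the VAE in fp32 as well
@REM --medvram: reduce VRAM usage to fit in 6 GB
set COMMANDLINE_ARGS=--precision full --no-half --no-half-vae --medvram
```

Running in full precision roughly doubles VRAM use per tensor, which is why `--medvram` is paired with it here.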

I did that and it still doesn't work; I just keep getting black image output. It worked fine with those settings in 1.4.

Thought this was a general question and not specific to SD 2.0.
It depends on what platform you're using, but the SD 2.0 model doesn't work on some (at least that was the case for me).
Automatic1111's webui now has a fix.

I found a fix that lets my GeForce 1660 Super work with half precision and no extra parameters (I use only --xformers and nothing more).

In Automatic1111, go to Settings > Optimizations and check «Pad prompt/negative prompt to be same length». This stops the UNet from mishandling prompt splitting/queuing when you exceed the 75-token prompt limit (longer prompts are split into chunks and queued, and that is where the problem happens on 16xx cards).

If you are not using Automatic1111, the parameter name is "pad_cond_uncond" (if you can find it in your config file).
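If your frontend stores its settings in a JSON config file, the toggle may look like the sketch below (the key name comes from the comment above; the file name and location are assumptions, so adjust to your install):

```shell
# In a config.json-style settings file the option is typically a boolean:
#   "pad_cond_uncond": true
# Check whether your config file has it (path is an assumption):
grep pad_cond_uncond config.json
```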

I can now generate very long prompts/negative prompts and have never seen a black image since.
My topic about the tests that led me to the fix: https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/13154
