
"Failed to load" error for gpt-oss-20b-GGUF on MacBook Air M1 (16GB) in LM Studio

#7
by e-druzyakin - opened

Hi, community!

I'm trying to run gpt-oss-20b-GGUF in LM Studio (v0.3.22) on a MacBook Air M1 with 16GB RAM, but I get this error:
"Failed to load the model. Model loading aborted due to insufficient system resources."

I've restarted the system and closed all other apps, but no luck. Has anyone managed to run this model on a Mac M1/M2 with 16 GB? Are there any LM Studio settings (e.g., GPU Offload, quantization) or alternatives like Ollama that might help? The rough memory math I used to sanity-check this is below.
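For context, here's the back-of-envelope check I did in Python. The GGUF file size (~12 GB), the usable-memory fraction macOS grants the GPU, and the per-token KV-cache figure are all my own guesses rather than measured values, so treat the numbers as a sketch, not an answer:

```python
# Rough feasibility check: does a GGUF model fit in unified memory on a 16 GB Mac?
# File size + KV cache + runtime overhead must stay under the portion of unified
# memory that the Metal backend is actually allowed to use (assumed ~70% here).

def estimate_fit(gguf_size_gb: float,
                 context_tokens: int = 4096,
                 kv_bytes_per_token: float = 0.5e6,   # rough guess; depends on layers/heads
                 total_ram_gb: float = 16.0,
                 usable_fraction: float = 0.70) -> None:
    kv_cache_gb = context_tokens * kv_bytes_per_token / 1e9
    needed_gb = gguf_size_gb + kv_cache_gb + 1.0      # +1 GB runtime overhead (assumption)
    budget_gb = total_ram_gb * usable_fraction
    verdict = "fits" if needed_gb <= budget_gb else "does NOT fit"
    print(f"needs ~{needed_gb:.1f} GB vs ~{budget_gb:.1f} GB budget -> {verdict}")

# Hypothetical numbers: a ~12 GB gpt-oss-20b GGUF on a 16 GB M1
estimate_fit(gguf_size_gb=12.0)
```

By that estimate it's right at (or over) the limit, which would explain the "insufficient system resources" message, but I'd still appreciate confirmation from anyone who has tried it.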

Thanks for any tips!
