Error loading model. (Exit code: 18446744072635810000). Unknown error. Try a different model and/or config.
I have a 3090 and 128gb of RAM. I should be able to load this fine. Any ideas? Using it with LMStudio.
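That huge exit code looks like a negative 32-bit status printed as an unsigned 64-bit integer, with the UI rounding it to float precision (18446744072635810000 is the rounded form of 18446744072635809792). A quick sketch to recover the signed value — the exact unrounded number here is an assumption:

```python
# Assumed exact value behind the rounded "18446744072635810000" in the UI
reported = 18446744072635809792

# Reinterpret the unsigned 64-bit value as signed two's complement
signed = reported - 2**64 if reported >= 2**63 else reported
print(signed)                     # -1073741824
print(hex(signed & 0xFFFFFFFF))  # 0xc0000000
```

A code in the 0xC0000000 range is the shape of a Windows NTSTATUS error severity, which points at a crash in the native runtime rather than anything wrong with the model file itself.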
same
Update your LM Studio apps.
thanks, it helped
My app is fully up to date
I ran into a similar issue with LM Studio and thought it was up to date because I used the in-app updater. When I went to the LM Studio website I realized that my 0.2.xx version was not auto-updating to 0.3.xx. Once I manually downloaded and reinstalled the binaries, everything worked as expected with my 5070 Ti.
I still have the same issue after updating to LM Studio 0.3.22 (Build 1)
Same here (on Linux Mint)
🥲 Error loading model. (Exit code: null). Please check settings and try loading the model again.
Update: When I switch the engine to CPU, the model loads without error. So it seems it doesn't work with the AMD (ROCm) GPU.
I get the same error as the subject line with version 0.3.22 build 2, although now the progress bar actually gets about 10-20% of the way before it fails instead of failing instantly (it still only takes about 1 second to fail though).
I upgraded LM Studio to v0.3.22, and it now works fine with both models (20b/120b).
I discovered the solution to my problem... I noticed the downloads icon in the lower left corner of the interface looked different. Opening it, I discovered that "llama.cpp-win-x86_64-nvidia-cuda12-avx2 (1.44.0)" had run into a permissions issue while trying to move a file. I clicked the retry button and it succeeded, then displayed a message that the new version would be used for subsequently loaded models. After this, I was able to load the gpt-oss-20b model successfully.
The ROCm llama.cpp runtime doesn't support this model yet: it's still on 1.43.1, and support was added in 1.44.0. Use the Vulkan llama.cpp runtime for now.