Is it possible to get llama.cpp support? I already managed to convert the model to GGUF, along with the mmproj file, but it has no inference support at this point. /:
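For context, the conversion was along these lines (a rough sketch only; the paths are placeholders, and the `--mmproj` flag assumes a recent llama.cpp checkout whose `convert_hf_to_gguf.py` can emit a multimodal projector, which may not apply to every model architecture):

```shell
# Hypothetical conversion sketch; /path/to/model is a placeholder
# for the downloaded Hugging Face model directory.

# Convert the language model weights to GGUF.
python convert_hf_to_gguf.py /path/to/model --outfile model.gguf

# Separately export the multimodal projector (assumes --mmproj support
# in this llama.cpp version; older trees used per-model convert scripts).
python convert_hf_to_gguf.py /path/to/model --mmproj --outfile mmproj.gguf
```

Even with both GGUF files produced, inference still fails until llama.cpp itself implements the model's architecture.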