---
tags:
- llava
- lmm
- ggml
- llama.cpp
---
# ggml_llava-v1.5-7b
This repo contains GGUF files for running inference on [llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end, without any extra dependencies.
**Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.
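
For reference, a minimal sketch of an end-to-end run is shown below. It assumes llama.cpp has been built with its LLaVA example and that the GGUF files from this repo have been downloaded locally; the binary name and flags (`llava-cli`, `-m`, `--mmproj`, `--image`, `-p`) and the quantized model file name are illustrative and may differ between llama.cpp versions, so check the llama.cpp documentation for your build.

```sh
# Build llama.cpp (the LLaVA example is included in the default CMake build).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run image + text inference with the GGUF files from this repo.
# Paths and the quantization choice below are illustrative.
./build/bin/llava-cli \
  -m ./models/llava-v1.5-7b/ggml-model-q4_k.gguf \
  --mmproj ./models/llava-v1.5-7b/mmproj-model-f16.gguf \
  --image ./some-image.jpg \
  -p "Describe the image in detail."
```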