---
tags:
- llava
- lmm
- ggml
- llama.cpp
---

# ggml_llava-v1.5-7b

This repo contains GGUF files to run inference on [llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) with [llama.cpp](https://github.com/ggerganov/llama.cpp) end-to-end, without any extra dependencies.
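For reference, a minimal invocation sketch using the LLaVA example shipped with llama.cpp. The binary name and the model file names below are assumptions: the binary has been called `llava-cli` (or `llama-llava-cli` in newer builds) depending on the llama.cpp version, and you should substitute the quantized model GGUF you actually downloaded from this repo.

```sh
# Build llama.cpp with its LLaVA example first, then run from the build directory.
# File names are placeholders -- use the GGUF files you downloaded from this repo.
./llava-cli \
    -m ggml-model-q4_k.gguf \
    --mmproj mmproj-model-f16.gguf \
    --image path/to/image.jpg \
    -p "Describe the image in detail."
```

The `-m` flag points at the language-model GGUF, `--mmproj` at the vision projector GGUF from this repo, and `--image` at the input picture; the prompt is passed with `-p`.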

**Note**: The `mmproj-model-f16.gguf` file structure is experimental and may change. Always use the latest code in llama.cpp.