## MiniCPM-o 2.6

This repository contains the [MiniCPM-o 2.6](https://huggingface.co/openbmb/MiniCPM-o-2_6) model weights in GGUF format for use with llama.cpp.

Currently, this README covers only MiniCPM-o 2.6's vision capabilities; we will add full omni-mode support as soon as possible.

### Prepare models and code

Download the [MiniCPM-o-2_6](https://huggingface.co/openbmb/MiniCPM-o-2_6) PyTorch model from Hugging Face into a local "MiniCPM-o-2_6" folder.
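
One way to fetch the weights, assuming the `huggingface_hub` CLI is installed, is sketched below; the local folder name just has to match the paths used by the commands later in this README.

```bash
# download the PyTorch checkpoint into ./MiniCPM-o-2_6
# (requires: pip install "huggingface_hub[cli]")
huggingface-cli download openbmb/MiniCPM-o-2_6 --local-dir MiniCPM-o-2_6
```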

Clone llama.cpp:
```bash
git clone git@github.com:OpenBMB/llama.cpp.git
cd llama.cpp
git checkout minicpm-omni
```

### Usage of MiniCPM-o 2.6

Convert the PyTorch model to GGUF files (you can also download the pre-converted [gguf](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) files we provide):

```bash
# split the multimodal projector out of the PyTorch checkpoint
python ./examples/llava/minicpmv-surgery.py -m ../MiniCPM-o-2_6
# convert the image encoder / projector to GGUF (produces mmproj-model-f16.gguf)
python ./examples/llava/minicpmv-convert-image-encoder-to-gguf.py -m ../MiniCPM-o-2_6 --minicpmv-projector ../MiniCPM-o-2_6/minicpmv.projector --output-dir ../MiniCPM-o-2_6/ --image-mean 0.5 0.5 0.5 --image-std 0.5 0.5 0.5 --minicpmv_version 4
# convert the language model to GGUF (f16)
python ./convert_hf_to_gguf.py ../MiniCPM-o-2_6/model

# quantize int4 version
./llama-quantize ../MiniCPM-o-2_6/model/ggml-model-f16.gguf ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf Q4_K_M
```
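
If the conversion succeeds, the files referenced by the inference commands below should now exist; a quick sanity check (file names taken from the commands above):

```bash
# verify the conversion outputs
ls ../MiniCPM-o-2_6/mmproj-model-f16.gguf         # vision projector (GGUF)
ls ../MiniCPM-o-2_6/model/ggml-model-f16.gguf     # f16 language model
ls ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf  # int4-quantized model
```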

Build llama.cpp using `CMake`:
https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md

```bash
cmake -B build
cmake --build build --config Release
```
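
If you have a CUDA-capable GPU, you can likely enable GPU offload at configure time; `GGML_CUDA` is the upstream llama.cpp build option and is assumed here to apply to this fork as well:

```bash
# optional: CUDA-enabled build (assumes the fork tracks upstream build flags)
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```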

Inference on Linux or macOS:
```bash
# run f16 version
./llama-minicpmv-cli -m ../MiniCPM-o-2_6/model/ggml-model-f16.gguf --mmproj ../MiniCPM-o-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -p "What is in the image?"

# run quantized int4 version
./llama-minicpmv-cli -m ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-o-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -p "What is in the image?"

# or run in interactive mode
./llama-minicpmv-cli -m ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf --mmproj ../MiniCPM-o-2_6/mmproj-model-f16.gguf -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 --image xx.jpg -i
```
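
To caption several images in one go, a small shell loop over the CLI invocation above works; this is just a convenience sketch wrapped around the documented int4 command:

```bash
# hypothetical helper: caption every .jpg in the current directory
for img in *.jpg; do
  echo "== $img =="
  ./llama-minicpmv-cli -m ../MiniCPM-o-2_6/model/ggml-model-Q4_K_M.gguf \
    --mmproj ../MiniCPM-o-2_6/mmproj-model-f16.gguf \
    -c 4096 --temp 0.7 --top-p 0.8 --top-k 100 --repeat-penalty 1.05 \
    --image "$img" -p "What is in the image?"
done
```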