Update README.md
README.md CHANGED
@@ -70,6 +70,20 @@ wget https://huggingface.co/xtuner/llava-phi-3-mini-gguf/resolve/main/llava-phi-
1. Build [llama.cpp](https://github.com/ggerganov/llama.cpp) ([docs](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage)).
2. Build `./llava-cli` ([docs](https://github.com/ggerganov/llama.cpp/tree/master/examples/llava#usage)).

+### Chat by `ollama`
+
+Note: llava-phi-3-mini uses the `Phi-3-instruct` chat template.
+
+```bash
+# fp16
+ollama create llava-phi3-f16 -f ./OLLAMA_MODELFILE_F16
+ollama run llava-phi3-f16 "xx.png Describe this image"
+
+# int4
+ollama create llava-phi3-int4 -f ./OLLAMA_MODELFILE_INT4
+ollama run llava-phi3-int4 "xx.png Describe this image"
+```
+
### Chat by `./llava-cli`

Note: llava-phi-3-mini uses the `Phi-3-instruct` chat template.
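
For context beyond this hunk, steps 1-2 above amount to something like the following sketch. The `make llava-cli` target matches the Makefile-based builds the linked docs described at the time; target and binary names vary across llama.cpp versions, so treat the linked documentation as authoritative.

```bash
# Sketch of steps 1-2: clone llama.cpp and build the llava-cli example.
# Assumes a Makefile-based build; newer llama.cpp versions use CMake and different binary names.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make llava-cli
```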
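
The commands under `### Chat by ./llava-cli` sit outside this hunk, so the invocation below is only an illustration of how the `Phi-3-instruct` chat template note applies: the template is passed through `-p`, and the gguf filenames are placeholders (the wget URL in the hunk header is truncated), not names taken from this commit. The flags `-m`, `--mmproj`, `--image`, `-c`, `-e`, and `-p` are standard `llava-cli` options.

```bash
# Illustration only: the two gguf filenames are placeholders, not from this commit.
# -m        : language-model gguf
# --mmproj  : vision projector gguf
# -e        : interpret \n escapes in the prompt string
./llava-cli \
  -m ./llava-phi-3-mini-f16.gguf \
  --mmproj ./llava-phi-3-mini-mmproj-f16.gguf \
  --image ./xx.png \
  -c 4096 -e \
  -p "<|user|>\n<image>\nDescribe this image.<|end|>\n<|assistant|>\n"
```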