Update README.md
# <span style="color: #7FFF7F;">Qwen2.5-VL-7B-Instruct GGUF Models</span>

These files were built with an imatrix file and the latest llama.cpp build. You must use a fork of llama.cpp to use vision with this model.

## How to Use Qwen 2.5 VL Instruct with llama.cpp
To utilize the experimental support for Qwen 2.5 VL in `llama.cpp`, follow these steps:

Note this uses a fork of llama.cpp. At this time the main branch does not support vision for this model.

Build llama.cpp as usual: https://github.com/ggml-org/llama.cpp#building-the-project
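A typical CMake build looks like the sketch below. The fork URL is a placeholder, since this README does not spell it out; substitute the fork referred to above.

```shell
# Sketch of a standard llama.cpp CMake build (see the link above for details).
# <fork-url> is a placeholder for the llama.cpp fork this README refers to.
git clone <fork-url> llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```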

Once the fork of llama.cpp is built, copy ./llama.cpp/build/bin/llama-qwen2-vl-cli to a chosen folder.
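As a sketch, the CLI is typically invoked along these lines. The model, mmproj, and image file names here are placeholders, and the flags follow the usual llama.cpp multimodal CLI conventions, so check `--help` on your build.

```shell
# Hypothetical invocation; all file names below are placeholders.
./llama-qwen2-vl-cli \
  -m Qwen2.5-VL-7B-Instruct-q4_k_m.gguf \
  --mmproj Qwen2.5-VL-7B-Instruct-mmproj-f16.gguf \
  --image test.jpg \
  -p "Describe this image."
```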
3. **Download the Qwen 2.5 VL gguf file**:
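One way to fetch a gguf file is with `huggingface-cli` from the `huggingface_hub` package. The repository id and file name below are placeholders; substitute this repo's id and the quantization you want.

```shell
# Placeholder repo id and file name; replace with the actual values.
pip install -U huggingface_hub
huggingface-cli download <repo-id> <model-file>.gguf --local-dir ./models
```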