Fixed README file.

README.md (changed):

````diff
@@ -165,7 +165,7 @@ First [build](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md)
 $ llama-server --hf-repo mirekphd/gte-Qwen2-1.5B-instruct-F16 --hf-file gte-Qwen2-1.5B-instruct-F16.gguf --n-gpu-layers 0 --ctx-size 131072 --embeddings
 
 # using a previously downloaded local model file(s)
-$ llama-server --model <path-to-hf-models>/mirekphd/gte-Qwen2-1.5B-instruct-F16.gguf --n-gpu-layers 0 --ctx-size 131072 --embeddings
+$ llama-server --model <path-to-hf-models>/mirekphd/gte-Qwen2-1.5B-instruct-F16/gte-Qwen2-1.5B-instruct-F16.gguf --n-gpu-layers 0 --ctx-size 131072 --embeddings
 ```
 
 ## Evaluation
````
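Once `llama-server` is running with `--embeddings`, it serves an OpenAI-compatible `/v1/embeddings` endpoint. As a minimal sketch of what a client request body looks like, the helper below builds the JSON payload; the helper name, the default model string, and the example base URL are illustrative assumptions, not part of the commit:

```python
import json

# Hypothetical helper (not from the repo): build an OpenAI-style
# embeddings request body for a llama-server started with --embeddings.
# The server is assumed to listen at http://localhost:8080 by default.
def build_embeddings_request(texts, model="gte-Qwen2-1.5B-instruct"):
    # "input" may be a single string or a list of strings in the
    # OpenAI embeddings schema; llama-server accepts the same shape.
    return {"model": model, "input": texts}

payload = build_embeddings_request(["hello world"])
body = json.dumps(payload)
print(body)
# POST this body to <base-url>/v1/embeddings with
# Content-Type: application/json to get the embedding vectors back.
```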