Upload README.md with huggingface_hub
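The commit title indicates the model card was pushed with the `huggingface_hub` Python client. A minimal sketch of such an upload (repo id taken from the model card heading below; the exact call the author used is not shown in this commit) could look like:

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are logged in (huggingface-cli login) or pass token=...
api.upload_file(
    path_or_fileobj="README.md",   # local model card to push
    path_in_repo="README.md",      # destination path in the repo
    repo_id="eltorio/Llama-3.2-3B-appreciation-F16-GGUF",
    commit_message="Upload README.md with huggingface_hub",
)
```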
README.md CHANGED

````diff
@@ -9,7 +9,6 @@ library_name: peft
 tags:
 - llama-cpp
 - gguf-my-lora
-pipeline_tag: text2text-generation
 ---
 
 # eltorio/Llama-3.2-3B-appreciation-F16-GGUF
@@ -18,14 +17,12 @@ Refer to the [original adapter repository](https://huggingface.co/eltorio/Llama-
 
 ## Use with llama.cpp
 
-- Download the [base model in GGUF format](https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-f16.gguf)
-- Download the [LoRa adapter in GGUF format](https://huggingface.co/eltorio/Llama-3.2-3B-appreciation-F16-GGUF/blob/main/Llama-3.2-3B-appreciation-f16.gguf)
 ```bash
 # with cli
-llama-cli -
+llama-cli -m base_model.gguf --lora Llama-3.2-3B-appreciation-f16.gguf (...other args)
 
 # with server
-llama-
+llama-server -m base_model.gguf --lora Llama-3.2-3B-appreciation-f16.gguf (...other args)
 ```
 
-To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
+To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
````
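The updated instructions expect a local base-model GGUF plus the LoRA adapter GGUF. A minimal sketch that fetches both with `huggingface_hub` and prints the resulting `llama-cli` command, assuming the base-model file referenced by the download link removed in this diff (`bartowski/Llama-3.2-3B-Instruct-GGUF`, `Llama-3.2-3B-Instruct-f16.gguf`) is still the intended base:

```python
from huggingface_hub import hf_hub_download

# Base model in GGUF format (repo and filename taken from the removed download link).
base_model = hf_hub_download(
    repo_id="bartowski/Llama-3.2-3B-Instruct-GGUF",
    filename="Llama-3.2-3B-Instruct-f16.gguf",
)

# LoRA adapter in GGUF format from this repository.
lora = hf_hub_download(
    repo_id="eltorio/Llama-3.2-3B-appreciation-F16-GGUF",
    filename="Llama-3.2-3B-appreciation-f16.gguf",
)

# Mirror the README's llama-cli invocation with the downloaded paths.
print(f"llama-cli -m {base_model} --lora {lora}")
```

The same two paths slot into the `llama-server` form shown in the diff.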