---
base_model: KatyTheCutie/R1-1.5B
tags:
- llama-cpp
- gguf-my-lora
---
# KatyTheCutie/R1-1.5B-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`KatyTheCutie/R1-1.5B`](https://huggingface.co/KatyTheCutie/R1-1.5B) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/KatyTheCutie/R1-1.5B) for more details.
## Use with llama.cpp
```bash
# Run interactively with llama-cli (base_model.gguf is the GGUF of the base model)
llama-cli -m base_model.gguf --lora R1-1.5B-q8_0.gguf (...other args)

# Serve over HTTP with llama-server
llama-server -m base_model.gguf --lora R1-1.5B-q8_0.gguf (...other args)
```
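The adapter strength can also be scaled at load time. A minimal sketch, assuming a llama.cpp build that provides the `--lora-scaled` flag (the 0.5 scale factor is just an illustrative value):

```bash
# Apply the adapter at half strength (useful scale values are model/task dependent)
llama-cli -m base_model.gguf --lora-scaled R1-1.5B-q8_0.gguf 0.5 (...other args)
```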
For more details on using LoRA adapters with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
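
If a standalone model file is preferred, the adapter can be merged into the base model offline. A minimal sketch, assuming a llama.cpp build that includes the `llama-export-lora` tool (the output filename is illustrative):

```bash
# Merge the LoRA weights into the base model and write a single GGUF file
llama-export-lora \
    -m base_model.gguf \
    --lora R1-1.5B-q8_0.gguf \
    -o R1-1.5B-merged.gguf
```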