---
base_model: Ketak-ZoomRx/Phi-3-medium-4k-instruct-biomarker
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- llama-cpp
- gguf-my-lora
license: apache-2.0
language:
- en
---

# farikaw599/Phi-3-medium-4k-instruct-biomarker-Q8_0-GGUF
This LoRA adapter was converted to GGUF format from [`Ketak-ZoomRx/Phi-3-medium-4k-instruct-biomarker`](https://huggingface.co/Ketak-ZoomRx/Phi-3-medium-4k-instruct-biomarker) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
Refer to the [original adapter repository](https://huggingface.co/Ketak-ZoomRx/Phi-3-medium-4k-instruct-biomarker) for more details.

## Use with llama.cpp

```bash
# With the CLI (base_model.gguf is a GGUF conversion of the base model)
llama-cli -m base_model.gguf --lora Phi-3-medium-4k-instruct-biomarker-q8_0.gguf (...other args)

# With the server
llama-server -m base_model.gguf --lora Phi-3-medium-4k-instruct-biomarker-q8_0.gguf (...other args)
```
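
As a minimal sketch of a concrete invocation (the base-model filename `Phi-3-medium-4k-instruct-q8_0.gguf` is an assumption; substitute whatever GGUF you converted the base model to, and adjust the prompt to your task):

```bash
# Hypothetical base-model filename; the adapter file is the one from this repo.
llama-cli -m Phi-3-medium-4k-instruct-q8_0.gguf \
  --lora Phi-3-medium-4k-instruct-biomarker-q8_0.gguf \
  -p "List the biomarkers mentioned in the following abstract: ..." \
  -n 256
```

If you want to apply the adapter at reduced strength, llama.cpp also accepts `--lora-scaled <file> <scale>` in place of `--lora`.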

For more details on using LoRA adapters with the llama.cpp server, see the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
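
Once `llama-server` is running with the adapter loaded, it exposes an OpenAI-compatible HTTP API. A minimal sketch, assuming the default port 8080 and the same hypothetical base-model filename as above:

```bash
# Start the server with the adapter applied.
llama-server -m Phi-3-medium-4k-instruct-q8_0.gguf \
  --lora Phi-3-medium-4k-instruct-biomarker-q8_0.gguf --port 8080

# In another shell, send a chat request to the OpenAI-compatible endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Which biomarkers are named in this sentence: ..."}]}'
```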