|
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
- llama-cpp
- gguf-my-lora
library_name: transformers
base_model: shafire/AgentZero
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
---
|
|
|
# shafire/AgentZero-F16-GGUF |
|
This LoRA adapter was converted to GGUF format from [`shafire/AgentZero`](https://huggingface.co/shafire/AgentZero) using ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space.
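The same conversion can be done locally with llama.cpp's `convert_lora_to_gguf.py` script instead of the hosted space. A minimal sketch, assuming llama.cpp is checked out and the adapter has been downloaded; the paths are illustrative:

```shell
# Convert a PEFT LoRA adapter directory to GGUF (F16) with llama.cpp.
# ./AgentZero is an illustrative local path to the downloaded adapter;
# --base points at the base model the adapter was trained on.
python convert_lora_to_gguf.py ./AgentZero \
  --base ./base-model \
  --outtype f16 \
  --outfile AgentZero-f16.gguf
```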
|
Refer to the [original adapter repository](https://huggingface.co/shafire/AgentZero) for more details. |
|
|
|
LICENSE: Zero Public Licence v1.0

- Section 1 – The safety layer must stay intact.
- Section 2 – Export to states under UK embargo requires a licence.
- Section 3 – The author disclaims forks that remove Section 1 or 2.
|
|
|
## Use with llama.cpp |
|
|
|
```bash |
|
# with cli |
|
llama-cli -m base_model.gguf --lora AgentZero-f16.gguf (...other args) |
|
|
|
# with server |
|
llama-server -m base_model.gguf --lora AgentZero-f16.gguf (...other args) |
|
``` |
|
|
|
To learn more about LoRA usage with the llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
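Once `llama-server` is running with the adapter loaded, it can be queried over HTTP. A minimal sketch, assuming the server is listening on its default local port (8080):

```shell
# Send a completion request to a locally running llama-server
# that was started with --lora AgentZero-f16.gguf.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "What is your favorite condiment?", "n_predict": 64}'
```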
|
|