---
base_model: Spestly/Athena-R3-1.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
license: mit
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
library_name: transformers
extra_gated_prompt: By accessing this model, you agree to comply with ethical usage
  guidelines and accept full responsibility for its applications. You will not use
  this model for harmful, malicious, or illegal activities, and you understand that
  the model's use is subject to ongoing monitoring for misuse. This model is provided
  'AS IS', and by agreeing you accept responsibility for all outputs you generate.
extra_gated_fields:
  Name: text
  Organization: text
  Country: country
  Date of Birth: date_picker
  Intended Use:
    type: select
    options:
      - Research
      - Education
      - Personal Development
      - Commercial Use
      - label: Other
        value: other
  I agree to use this model in accordance with all applicable laws and ethical guidelines: checkbox
  I agree to use this model under the MIT licence: checkbox
---
# ysn-rfd/Athena-R3-1.5B-Q8_0-GGUF
This model was converted to GGUF format from [`Spestly/Athena-R3-1.5B`](https://huggingface.co/Spestly/Athena-R3-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Athena-R3-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
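If the install succeeded, the `llama-cli` binary should be on your PATH; a quick version check (assuming the Homebrew formula ships the binary under this name) confirms it:
```bash
# Print the llama.cpp build information to verify the install
llama-cli --version
```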
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ysn-rfd/Athena-R3-1.5B-Q8_0-GGUF --hf-file athena-r3-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
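For interactive multi-turn chat rather than a one-shot completion, recent llama.cpp builds offer a conversation mode via the `-cnv` flag; a minimal sketch, assuming your build includes it:
```bash
# Start an interactive chat session using the model's chat template
llama-cli --hf-repo ysn-rfd/Athena-R3-1.5B-Q8_0-GGUF --hf-file athena-r3-1.5b-q8_0.gguf -cnv
```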
### Server:
```bash
llama-server --hf-repo ysn-rfd/Athena-R3-1.5B-Q8_0-GGUF --hf-file athena-r3-1.5b-q8_0.gguf -c 2048
```
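Once the server is up, you can query its native completion endpoint over HTTP; a minimal sketch, assuming the default bind address of `127.0.0.1:8080`:
```bash
# Request a short completion from the running llama-server instance
curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```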
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
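If you only need to build the binaries (not develop against the repo), a shallow clone keeps the download small:
```bash
# Fetch only the latest commit instead of the full history
git clone --depth 1 https://github.com/ggerganov/llama.cpp
```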
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
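Note that recent llama.cpp checkouts have migrated from the Makefile to CMake; if `make` fails, the following is a likely equivalent (assuming CMake and the curl development headers are installed; in newer trees the CUDA option is spelled `GGML_CUDA=ON`):
```bash
# Configure with curl-based model downloading enabled, then build in Release mode
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```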
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo ysn-rfd/Athena-R3-1.5B-Q8_0-GGUF --hf-file athena-r3-1.5b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo ysn-rfd/Athena-R3-1.5B-Q8_0-GGUF --hf-file athena-r3-1.5b-q8_0.gguf -c 2048
```
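The server also exposes an OpenAI-compatible API; a minimal sketch of a chat request, again assuming the default `127.0.0.1:8080`:
```bash
# Send a chat-completion request to the OpenAI-compatible endpoint
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Briefly, what is GGUF?"}], "max_tokens": 64}'
```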