---
base_model: jackboot/uwu-qwen-32b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# jackboot/uwu-qwen-32b-Q4_K_M-GGUF
This model was converted to GGUF format from [`jackboot/uwu-qwen-32b`](https://huggingface.co/jackboot/uwu-qwen-32b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jackboot/uwu-qwen-32b) for more details on the model.
## Use with llama.cpp
You will need a reasonably up-to-date llama.cpp build. In text-generation-webui (ooba) you can swap tokenizers around using the llama.cpp HF loader. This quant was created with the default mergekit tokenizer, which uses the QwQ BOS/EOS tokens.
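To double-check which BOS/EOS tokens actually landed in the GGUF metadata, one option is the `gguf-dump` tool from the `gguf` Python package (a sketch, assuming the quantized file has already been downloaded to the current directory):
```bash
# Install the gguf utilities, then inspect the tokenizer key-value pairs
# (e.g. tokenizer.ggml.bos_token_id / tokenizer.ggml.eos_token_id).
pip install gguf
gguf-dump uwu-qwen-32b-q4_k_m.gguf | grep tokenizer.ggml
```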
### CLI:
```bash
llama-cli --hf-repo jackboot/uwu-qwen-32b-Q4_K_M-GGUF --hf-file uwu-qwen-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
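Recent llama.cpp builds also offer an interactive chat mode via the `-cnv` flag instead of a one-shot prompt (a sketch; check `llama-cli --help` in your build to confirm the flag is available):
```bash
# Start a chat-style session using the model's chat template.
llama-cli --hf-repo jackboot/uwu-qwen-32b-Q4_K_M-GGUF --hf-file uwu-qwen-32b-q4_k_m.gguf -cnv
```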
### Server:
```bash
llama-server --hf-repo jackboot/uwu-qwen-32b-Q4_K_M-GGUF --hf-file uwu-qwen-32b-q4_k_m.gguf -c 2048
```
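Once the server is up (it listens on port 8080 by default), you can send requests to its OpenAI-compatible endpoint; a minimal sketch:
```bash
# Query the running llama-server via its OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hello! Briefly introduce yourself."}
    ],
    "max_tokens": 128
  }'
```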
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
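For example, a CUDA-enabled build might look like the following (a sketch; the `LLAMA_*` flag names follow the Makefile build used here, and newer llama.cpp releases have since moved to CMake):
```bash
# Build with HTTP download support and CUDA offload (NVIDIA GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make -j$(nproc)
```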
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jackboot/uwu-qwen-32b-Q4_K_M-GGUF --hf-file uwu-qwen-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jackboot/uwu-qwen-32b-Q4_K_M-GGUF --hf-file uwu-qwen-32b-q4_k_m.gguf -c 2048
```
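If you would rather download the file yourself than rely on `--hf-repo`, a sketch using `huggingface-cli` (from the `huggingface_hub` package), pointing the binary at the local copy with `-m`:
```bash
# Fetch the quantized file manually, then run from the local copy.
huggingface-cli download jackboot/uwu-qwen-32b-Q4_K_M-GGUF uwu-qwen-32b-q4_k_m.gguf --local-dir .
./llama-cli -m uwu-qwen-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```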