Novaciano committed
Commit 56caf83 · verified · 1 Parent(s): 599caad

Update README.md

Files changed (1):
  1. README.md +16 -44
README.md CHANGED
@@ -2,50 +2,22 @@
  library_name: transformers
  tags:
  - llama-cpp
- - gguf-my-repo
+ - koboldcpp
+ - qwen2.5
+ - rp
+ - roleplay
+ - nsfw
+ - uncensored
+ - 4-bit
+ - 1b
+ - not-for-all-audiences
  base_model: braindao/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored
+ license: apache-2.0
+ datasets:
+ - lemonilia/LimaRP
+ language:
+ - es
+ - en
  ---

- # Novaciano/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored-Q5_K_M-GGUF
- This model was converted to GGUF format from [`braindao/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored`](https://huggingface.co/braindao/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/braindao/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux).
-
- ```bash
- brew install llama.cpp
-
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo Novaciano/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-uncensored-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo Novaciano/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-uncensored-q5_k_m-imat.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo Novaciano/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-uncensored-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo Novaciano/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored-Q5_K_M-GGUF --hf-file deepseek-r1-distill-qwen-1.5b-uncensored-q5_k_m-imat.gguf -c 2048
- ```
+ # DeepSeek R1 Distill Qwen 1.5B LimaRP Uncensored GGUF
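
For reference, the conversion the removed card attributes to the GGUF-my-repo space (HF checkpoint → GGUF → Q5_K_M quant) can also be reproduced by hand with llama.cpp's own tooling. The sketch below is illustrative only, not the commands the space actually ran: it assumes llama.cpp's `convert_hf_to_gguf.py` script and `llama-quantize` binary, the paths and output names are made up, and the `-imat` suffix on the published file suggests an importance-matrix quant, which this sketch does not build.

```bash
# Illustrative sketch of a by-hand GGUF conversion; paths and output names are hypothetical.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && LLAMA_CURL=1 make    # build the tools, as in the card's Step 2
pip install -r requirements.txt     # Python deps for the conversion script

# 1. Convert the HF checkpoint to a 16-bit GGUF file.
python convert_hf_to_gguf.py /path/to/DeepSeek-R1-Distill-Qwen-1.5B-Uncensored \
    --outtype f16 --outfile deepseek-r1-1.5b-uncensored-f16.gguf

# 2. Quantize to Q5_K_M, the quant type named in the card.
#    (llama-quantize also accepts --imatrix <file>, before the input file,
#    which the "-imat" suffix on the published GGUF suggests was used.)
./llama-quantize deepseek-r1-1.5b-uncensored-f16.gguf \
    deepseek-r1-distill-qwen-1.5b-uncensored-q5_k_m.gguf Q5_K_M
```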