Novaciano committed on
Commit fbc3614 · verified · 1 Parent(s): 46cb57b

Update README.md

Files changed (1)
  1. README.md +5 -41
README.md CHANGED
@@ -18,46 +18,10 @@ tags:
  - gguf-my-repo
  ---
 
- # Novaciano/SibilaDeCumas-1.1B-Q6_K-GGUF
- This model was converted to GGUF format from [`Novaciano/SibilaDeCumas-1.1B`](https://huggingface.co/Novaciano/SibilaDeCumas-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/Novaciano/SibilaDeCumas-1.1B) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
-
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo Novaciano/SibilaDeCumas-1.1B-Q6_K-GGUF --hf-file sibiladecumas-1.1b-q6_k.gguf -p "The meaning to life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo Novaciano/SibilaDeCumas-1.1B-Q6_K-GGUF --hf-file sibiladecumas-1.1b-q6_k.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo Novaciano/SibilaDeCumas-1.1B-Q6_K-GGUF --hf-file sibiladecumas-1.1b-q6_k.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo Novaciano/SibilaDeCumas-1.1B-Q6_K-GGUF --hf-file sibiladecumas-1.1b-q6_k.gguf -c 2048
- ```
 
+ # SibilaDeCumas-1.1B
+
+ ## Koboldcpp
+
+ <center>
+ <img src="https://i.ibb.co/s9DPmcp7/IMG-20250313-213616.jpg" alt="IMG-20250313-213616" border="0">
+ </center>
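
The new README names KoboldCpp but gives no launch command, so here is a minimal sketch. It assumes the quantized file keeps the `sibiladecumas-1.1b-q6_k.gguf` name used in the removed llama.cpp instructions; the flags shown are standard KoboldCpp options, not anything specified by this commit.

```bash
# Assumption: repo id and file name are taken from the removed llama.cpp
# section of this diff; they are not stated in the new README itself.

# Fetch the quantized checkpoint from the Hub.
huggingface-cli download Novaciano/SibilaDeCumas-1.1B-Q6_K-GGUF \
  sibiladecumas-1.1b-q6_k.gguf --local-dir .

# Launch KoboldCpp with a 2048-token context, mirroring the old llama-server example.
python koboldcpp.py --model sibiladecumas-1.1b-q6_k.gguf --contextsize 2048
```

Once running, KoboldCpp serves its web UI locally (port 5001 by default), where the model can be prompted interactively.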