Novaciano committed on
Commit 0c23624 · verified · 1 Parent(s): 7ff4ab0

Update README.md

Files changed (1)
  1. README.md +10 -41
README.md CHANGED
@@ -1,5 +1,7 @@
  ---
  base_model: Novaciano/SibilaDeCumas-1.1B
+ ---
+ base_model: Novaciano/SibilaDeCumas-1.1B
  datasets:
  - OEvortex/vortex-mini
  - Nitral-AI/Reddit-NSFW-Writing_Prompts_ShareGPT
@@ -15,49 +17,16 @@ tags:
  - tinyllama
  - 1.1b
  - llama-cpp
- - gguf-my-repo
  ---
  
- # Novaciano/SibilaDeCumas-1.1B-IQ4_XS-GGUF
- This model was converted to GGUF format from [`Novaciano/SibilaDeCumas-1.1B`](https://huggingface.co/Novaciano/SibilaDeCumas-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/Novaciano/SibilaDeCumas-1.1B) for more details on the model.
- 
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
- 
- ```bash
- brew install llama.cpp
- ```
- 
- Invoke the llama.cpp server or the CLI.
- 
- ### CLI:
- ```bash
- llama-cli --hf-repo Novaciano/SibilaDeCumas-1.1B-IQ4_XS-GGUF --hf-file sibiladecumas-1.1b-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
- ```
- 
- ### Server:
- ```bash
- llama-server --hf-repo Novaciano/SibilaDeCumas-1.1B-IQ4_XS-GGUF --hf-file sibiladecumas-1.1b-iq4_xs-imat.gguf -c 2048
- ```
- 
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+ # SibilaDeCumas-1.1B
  
- Step 1: Clone llama.cpp from GitHub.
- ```bash
- git clone https://github.com/ggerganov/llama.cpp
- ```
+ <center>
+ <img src="https://historia.nationalgeographic.com.es/medio/2023/03/15/la-sibila-de-cumas-oleo-por-domenichino-1617-museos-capitolinos-roma_6d6a13bb_230315164741_550x773.jpg" alt="IMG-20250313-213616" border="0">
+ </center>
  
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```bash
- cd llama.cpp && LLAMA_CURL=1 make
- ```
+ ## Koboldcpp
  
- Step 3: Run inference through the main binary:
- ```bash
- ./llama-cli --hf-repo Novaciano/SibilaDeCumas-1.1B-IQ4_XS-GGUF --hf-file sibiladecumas-1.1b-iq4_xs-imat.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```bash
- ./llama-server --hf-repo Novaciano/SibilaDeCumas-1.1B-IQ4_XS-GGUF --hf-file sibiladecumas-1.1b-iq4_xs-imat.gguf -c 2048
- ```
+ <center>
+ <img src="https://i.ibb.co/s9DPmcp7/IMG-20250313-213616.jpg" alt="IMG-20250313-213616" border="0">
+ </center>
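For reference, the `llama-server` invocation removed above serves an HTTP completion API once running. A minimal request sketch against llama.cpp's native `/completion` endpoint, assuming the server is up on its default `localhost:8080` (the `n_predict` value is illustrative, not part of this commit):

```bash
# Query a running llama-server instance; /completion is llama.cpp's native endpoint.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```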
 
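The updated card points to Koboldcpp but shows only a screenshot. A minimal launch sketch, assuming a local KoboldCpp checkout and the quantized file `sibiladecumas-1.1b-iq4_xs-imat.gguf` from this repo downloaded beside it (the context size here is illustrative):

```bash
# Load the GGUF in KoboldCpp; the web UI is then served on its default port.
python koboldcpp.py --model sibiladecumas-1.1b-iq4_xs-imat.gguf --contextsize 2048
```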