GlobalMeltdown committed
Commit 84e3d3e
Parent: 916b9fc

Update README.md

Files changed (1)
  1. README.md +19 -41
README.md CHANGED
@@ -6,49 +6,27 @@ tags:
  - merge
  - llama-cpp
  - gguf-my-repo
  ---
- # GlobalMeltdown/MaidenlessNoMore-7B-GGUF
- This model was converted to GGUF format from [`GlobalMeltdown/MaidenlessNoMore-7B`](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) for more details on the model.
-
- ## Use with llama.cpp
- Install llama.cpp through brew (works on Mac and Linux):
-
- ```bash
- brew install llama.cpp
- ```
- Invoke the llama.cpp server or the CLI.
-
- ### CLI:
- ```bash
- llama-cli --hf-repo GlobalMeltdown/MaidenlessNoMore-7B-GGUF --hf-file MaidenlessNoMore-7B-q4_k_m.gguf -p "The meaning of life and the universe is"
- ```
-
- ### Server:
- ```bash
- llama-server --hf-repo GlobalMeltdown/MaidenlessNoMore-7B-GGUF --hf-file MaidenlessNoMore-7B-q4_k_m.gguf -c 2048
- ```
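- Once the server is running, you can query it over HTTP; a minimal sketch, assuming the default port (8080) and llama.cpp's stock `/completion` endpoint:
- ```bash
- # Send a completion request to the local llama-server instance
- curl http://localhost:8080/completion -d '{"prompt": "The meaning of life and the universe is", "n_predict": 64}'
- ```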
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
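- For example, a CUDA build on Linux would add the flag mentioned above:
- ```
- cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
- ```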
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo GlobalMeltdown/MaidenlessNoMore-7B-GGUF --hf-file MaidenlessNoMore-7B-q4_k_m.gguf -p "The meaning of life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo GlobalMeltdown/MaidenlessNoMore-7B-GGUF --hf-file MaidenlessNoMore-7B-q4_k_m.gguf -c 2048
- ```
 
  - merge
  - llama-cpp
  - gguf-my-repo
+ - Roleplay
+ - RP
+ - Chat
+ - text-generation-inference
+ - 'merge '
+ - text generation
+ license: cc-by-4.0
+ language:
+ - en
  ---
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bbcee1320702b1043ef8ae/9OPS0wrdkzksmyuM6Nxdu.png)
+ MaidenlessNoMore-7B-GGUF was my first attempt at merging an LLM.
+ I decided to merge one of the first models I really enjoyed, which not many people know of,
+ https://huggingface.co/cookinai/Valkyrie-V1, with my other favorite model, which has been my fallback model for a long time: https://huggingface.co/SanjiWatsuki/Kunoichi-7B
+
+ This was more of an experiment than anything else. Hopefully it will lead to some more interesting merges, and who knows what else, in the future.
+ I mean, we have to start somewhere, right?
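+ The card doesn't say which tool or settings were used for the merge, but for readers curious what a merge of these two models could look like in practice, here is a hypothetical sketch using mergekit with a SLERP merge. The method, layer ranges, and blend factor are illustrative assumptions, not the actual recipe:
+ ```bash
+ # Hypothetical mergekit recipe -- the real method/parameters for this model are not published
+ pip install mergekit
+ cat > maidenless-merge.yml <<'EOF'
+ merge_method: slerp          # assumed; could equally have been a linear merge
+ base_model: SanjiWatsuki/Kunoichi-7B
+ slices:
+   - sources:
+       - model: cookinai/Valkyrie-V1
+         layer_range: [0, 32]
+       - model: SanjiWatsuki/Kunoichi-7B
+         layer_range: [0, 32]
+ parameters:
+   t: 0.5                     # blend factor, illustrative only
+ dtype: bfloat16
+ EOF
+ mergekit-yaml maidenless-merge.yml ./merged-model
+ ```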
+
+ The Alpaca or Alpaca roleplay prompt format is recommended.
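+ For reference, a typical Alpaca-style prompt looks like this (the exact system line varies between frontends):
+ ```
+ Below is an instruction that describes a task. Write a response that appropriately completes the request.
+
+ ### Instruction:
+ {your prompt here}
+
+ ### Response:
+ ```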
+
+ # GlobalMeltdown/MaidenlessNoMore-7B-GGUF
+ This model was converted to GGUF format from [`GlobalMeltdown/MaidenlessNoMore-7B`](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) for more details on the model.