---
base_model: MaidenlessNoMore-7B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
- Roleplay
- RP
- Chat
- text-generation-inference
- text generation
license: cc-by-4.0
language:
- en
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65bbcee1320702b1043ef8ae/9OPS0wrdkzksmyuM6Nxdu.png)
MaidenlessNoMore-7B was my first attempt at merging an LLM.
I decided to merge one of the first models I really enjoyed, which not many people seem to know about, https://huggingface.co/cookinai/Valkyrie-V1, with my other favorite, https://huggingface.co/SanjiWatsuki/Kunoichi-7B, which has been my fallback model for a long time.

This was more of an experiment than anything else. Hopefully it will lead to some more interesting merges, and who knows what else, in the future.
I mean, we have to start somewhere, right?
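
For anyone curious what a two-model merge like this looks like in practice, here is a purely illustrative mergekit SLERP recipe combining the two source models; the actual merge method and parameters behind MaidenlessNoMore-7B are not documented in this card, so treat every value below as a placeholder:

```yaml
# Illustrative sketch only -- not the recipe actually used for this merge.
slices:
  - sources:
      - model: cookinai/Valkyrie-V1
        layer_range: [0, 32]   # Mistral-7B models have 32 transformer layers
      - model: SanjiWatsuki/Kunoichi-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-7B
parameters:
  t:
    - value: 0.5               # even blend; the real weighting is unknown
dtype: bfloat16
```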

The Alpaca or Alpaca-Roleplay prompt format is recommended.
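
For reference, the standard Alpaca template looks like the following (the roleplay variant typically swaps the system line for a character and scenario description); this is the generic format, not something specific to this card:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```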


# GlobalMeltdown/MaidenlessNoMore-7B-GGUF
This model was converted to GGUF format from [`GlobalMeltdown/MaidenlessNoMore-7B`](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/GlobalMeltdown/MaidenlessNoMore-7B) for more details on the model.
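
A quick way to try the GGUF locally is llama.cpp's `llama-cli`; the sketch below streams the model from the Hub, but the quant filename is only an assumption, so match it to whichever `.gguf` file is actually listed in this repo:

```bash
# Install llama.cpp (Homebrew formula; building from source also works)
brew install llama.cpp

# Prompt the model in Alpaca format, pulling the weights straight from the Hub.
# NOTE: the --hf-file value is an assumed example, not a confirmed filename.
llama-cli --hf-repo GlobalMeltdown/MaidenlessNoMore-7B-GGUF \
  --hf-file maidenlessnomore-7b-q4_k_m.gguf \
  -c 2048 \
  -p "### Instruction:
Introduce yourself in one sentence.

### Response:
"
```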