---
language:
- en
pipeline_tag: text-generation
tags:
- quantized
- 2-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text2text-generation
- mistral
- roleplay
- merge
base_model:
- KatyTheCutie/LemonadeRP-4.5.3
- LakoMoor/Silicon-Alice-7B
- Endevor/InfinityRP-v1-7B
- HuggingFaceH4/zephyr-7b-beta
model_name: GIGABATEMAN-7B
model_creator: DZgas
quantized_by: DZgas
---
<img src="logo.png">
This is a GGUF variant of the <a href="https://huggingface.co/DZgas/GIGABATEMAN-7B?not-for-all-audiences=true">GIGABATEMAN-7B</a> model. Use it with <a href="https://github.com/LostRuins/koboldcpp/releases">koboldcpp</a> (do not use GPT4ALL).
The most UNcensored model that I know of.
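A minimal sketch of downloading one quantization and loading it in koboldcpp. The exact `.gguf` filename below is an assumption (pick whichever quant file actually exists in this repo's file list), and the commands require a network connection and an installed `huggingface_hub` CLI:

```shell
# Download one quantized file from this repo (filename is assumed -- check
# the "Files" tab for the real quant names, e.g. Q4_K_M, Q5_K_M, Q8_0).
huggingface-cli download DZgas/GIGABATEMAN-7B-GGUF GIGABATEMAN-7B.Q4_K_M.gguf --local-dir .

# Run it with koboldcpp (from a source checkout; release binaries accept
# the same --model flag). Opens a local web UI on port 5001 by default.
python koboldcpp.py --model GIGABATEMAN-7B.Q4_K_M.gguf --contextsize 4096
```

Lower-bit quants (2-bit, 4-bit) trade quality for smaller memory use; the 8-bit file is closest to the original weights.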