---
language:
  - en
library_name: transformers
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
  - mistral
  - llama
  - nsfw
  - roleplay
  - merge
base_model:
  - KatyTheCutie/LemonadeRP-4.5.3
  - LakoMoor/Silicon-Alice-7B
  - HuggingFaceH4/zephyr-7b-beta
  - Endevor/InfinityRP-v1-7B
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# DZgas/GIGABATEMAN-7B AWQ

## Model Summary

If you are tired of models whose output is 90% warnings and 10% actual response, this model is for you.

I recommend using the GGUF variant with koboldcpp (do not use GPT4All).

I merged this model for my own use. Over the course of a week, I analyzed the responses of more than 30 neural networks, selected the four most suitable by my own criteria, and merged them into one.
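A merge of the four base models listed in the metadata could be expressed as a mergekit configuration along these lines. This is only a sketch: the card does not document the merge method or weights used, so the `linear` method and the equal `0.25` weights below are assumptions, not the author's actual recipe.

```yaml
# Hypothetical mergekit config; merge_method and weights are assumptions.
models:
  - model: KatyTheCutie/LemonadeRP-4.5.3
    parameters:
      weight: 0.25
  - model: LakoMoor/Silicon-Alice-7B
    parameters:
      weight: 0.25
  - model: HuggingFaceH4/zephyr-7b-beta
    parameters:
      weight: 0.25
  - model: Endevor/InfinityRP-v1-7B
    parameters:
      weight: 0.25
merge_method: linear  # assumption: actual method not stated in this card
dtype: float16
```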

With GIGABATEMAN-7B, you can talk about topics that most other models refuse to discuss.