---
base_model: DZgas/GIGABATEMAN-7B
inference: false
language:
  - en
library_name: transformers
merged_models:
  - KatyTheCutie/LemonadeRP-4.5.3
  - LakoMoor/Silicon-Alice-7B
  - HuggingFaceH4/zephyr-7b-beta
  - Endevor/InfinityRP-v1-7B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
  - 4-bit
  - AWQ
  - text-generation
  - autotrain_compatible
  - endpoints_compatible
  - mistral
  - llama
  - nsfw
  - roleplay
  - merge
---

# DZgas/GIGABATEMAN-7B AWQ

## Model Summary

If you are tired of neural networks writing 90% warnings and 10% response, this model is for you.

I recommend using the GGUF variant with koboldcpp (do not use GPT4All).

I merged this model for my own use. Over the course of a week, I analyzed the responses of more than 30 neural networks, chose the four most suitable ones according to my personal criteria, and merged them into one.

With the GIGABATEMAN-7B model, you can talk about everything that other models usually refuse to discuss.
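Since this repo is a 4-bit AWQ quantization (see the tags above), a minimal loading sketch with `transformers` looks like the following. Transformers loads AWQ checkpoints when the `autoawq` package is installed; the `model_path` below is a hypothetical local path or repo id, and a CUDA GPU is assumed, since AWQ kernels do not run on CPU.

```python
# Minimal sketch: running a 4-bit AWQ checkpoint with transformers.
# Assumes `pip install transformers autoawq` and an available CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "GIGABATEMAN-7B-AWQ"  # hypothetical: replace with the actual repo id or local dir

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",        # place the quantized layers on the available GPU(s)
    low_cpu_mem_usage=True,
)

prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If you prefer the GGUF route recommended above, skip this snippet and load the GGUF file directly in koboldcpp instead.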