
Description

This is a GPTQ 4-bit quantized version of Llama-3-Lumimaid-8B-v0.1. It was quantized at a sequence length of 8192 using the AutoGPTQ wikitext2 example script.

This is my first quant, so I may have messed up somewhere. However, in my testing it appears to be working well. (by mikudev)
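To illustrate what 4-bit quantized storage means, here is a minimal round-to-nearest group-quantization sketch in plain Python. This is not the AutoGPTQ pipeline: real GPTQ additionally applies error-compensating weight updates driven by calibration data (wikitext2 here), which this sketch omits. All names below (`quantize_group`, `dequantize_group`) are illustrative, not part of any library.

```python
def quantize_group(weights, bits=4):
    """Asymmetrically quantize one group of weights to `bits`-bit integers.

    Returns the integer codes plus the (scale, zero-point) needed to
    reconstruct approximate float values later.
    """
    qmax = (1 << bits) - 1                      # 15 for 4-bit
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / qmax or 1.0         # avoid div-by-zero for flat groups
    zero = round(-wmin / scale)                 # integer zero-point
    q = [max(0, min(qmax, round(w / scale) + zero)) for w in weights]
    return q, scale, zero

def dequantize_group(q, scale, zero):
    """Map integer codes back to approximate float weights."""
    return [(qi - zero) * scale for qi in q]

# Round-trip one small group of fake weights.
group = [0.12, -0.07, 0.33, -0.25, 0.01, 0.18, -0.31, 0.09]
q, scale, zero = quantize_group(group)
recon = dequantize_group(q, scale, zero)
max_err = max(abs(a - b) for a, b in zip(group, recon))
```

Each weight is stored as a 4-bit code (0–15) plus a shared scale/zero-point per group, which is where the memory savings over FP16 come from; the reconstruction error per weight is bounded by half the scale.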


license: cc-by-nc-4.0
tags:

  • not-for-all-audiences
  • nsfw

Lumimaid 0.1

This model uses the Llama3 prompting format

This is Llama3 trained on our RP datasets. We tried to strike a balance between ERP and RP: not too horny, but just enough.

We also added some non-RP datasets to make the model less dumb overall. The split should be roughly 40%/60% non-RP to RP+ERP data.

This model includes the new Luminae dataset from Ikari.

If you try this model, please give us some feedback, either in the Community tab on HF or on our Discord server.

Credits:

  • Undi
  • IkariDev

Description

This repo contains FP16 files of Lumimaid-8B-v0.1.

Switch: 8B - 70B - 70B-alt

Training data used:

Models used (only for 8B)

  • Initial LumiMaid 8B Finetune
  • Undi95/Llama-3-Unholy-8B-e4
  • Undi95/Llama-3-LewdPlay-8B

Prompt template: Llama3

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
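The template above can be filled programmatically. The helper below (`format_prompt` is an illustrative name, not something this repo ships) substitutes a system prompt and user message into the standard Llama 3 special-token layout, leaving the string ready for the model to complete the assistant turn:

```python
def format_prompt(system_prompt: str, user_input: str) -> str:
    """Build a Llama3-format prompt string for a single user turn."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_prompt("You are a helpful roleplay assistant.", "Hello!")
```

Note that generation should stop on `<|eot_id|>`, which the model emits at the end of its reply.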

Others

Undi: If you want to support us, you can here.

IkariDev: Visit my retro/neocities style website please kek
