---
model_creator: Nekochu
quantized_by: Nekochu
model_name: Luminia 13B v3
pretty_name: Luminia
model_type: llama2
prompt_template: >-
  Below is an instruction that describes a task. Write a response that
  appropriately completes the request. ### Instruction: {Instruction} {summary} ### Input: {category} ### Response: {prompt}
base_model: meta-llama/Llama-2-13b-chat-hf
library_name: peft
license: apache-2.0
datasets:
- Nekochu/discord-unstable-diffusion-SD-prompts
- glaiveai/glaive-function-calling-v2
- TIGER-Lab/MathInstruct
- Open-Orca/SlimOrca
- GAIR/lima
- sahil2801/CodeAlpaca-20k
- garage-bAInd/Open-Platypus
language:
- en
pipeline_tag: text-generation
task_categories:
- question-answering
- text2text-generation
- conversational
inference: True
widget:
- example_title: prompt assistant
messages:
- role: system
content: Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
- role: user
content: "### Instruction:\nCreate stable diffusion metadata based on the given english description. Luminia\n### Input:\nfavorites and popular SFW\n### Response:\n"
output:
text: Luminia, 1girl, solo, blonde hair, long hair,
tags:
- llama-factory
- lora
- generated_from_trainer
- llama2
- llama
- instruct
- finetune
- gpt4
- synthetic data
- stable diffusion
- alpaca
- llm
model-index:
- name: Luminia-13B-v3
results: []
---
Luminia v3 is good at reasoning: it can expand a short summary description into a detailed Stable Diffusion prompt, and may output NSFW content.

The LoRA adapter is included, along with quants: exllamav2 [2.4bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Luminia-13B-v3/tree/8.0bpw-h8) | GGUF [Q4_K_M](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-Q4_K_M.gguf), [IQ4_NL](https://huggingface.co/Nekochu/Luminia-13B-v3/blob/main/Luminia-13B-v3-IQ4_NL.gguf)
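To use the LoRA adapter directly, here is a minimal loading sketch with Transformers and PEFT. It assumes the gated Llama-2 base model is accessible to you; the 4-bit load is only a suggestion to fit a ~24 GB GPU, not a setting taken from this card:

```python
# Minimal sketch: apply the Luminia-13B-v3 LoRA adapter on top of Llama-2-13b-chat.
# Requires: transformers, peft, bitsandbytes (for the optional 4-bit load).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,  # optional: fits the model into ~24 GB VRAM
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Nekochu/Luminia-13B-v3")
model.eval()
```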
## Prompt template: Alpaca
Output example, tested in text-generation-webui:

| Input | Base llama-2-chat | QLoRA |
|:---------|:-------|:---------|
| Create stable diffusion metadata based on the given english description. Luminia \n### Input:\n favorites and popular SFW | Luminia, a mystical world of wonder and magic 🧝♀️✨ A place where technology and nature seamlessly blend together ... | `<lora:Luminari-10:0.8>` Luminari, 1girl, solo, blonde hair, long hair, blue eyes, (black dress), looking at viewer, night sky, starry sky, constellation, smile, upper body, outdoors, forest, moon, tree, mountain, light particle ... |
The prompt produced by the QLoRA can be pasted directly into [A1111/SD-WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui).
#### Full Prompt
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Create stable diffusion metadata based on the given english description. Luminia
### Input:
favorites and popular SFW
### Response:
```
"Luminia" can be any short description, more info on my SD dataset [here](https://huggingface.co/datasets/Nekochu/discord-unstable-diffusion-SD-prompts#dataset-description).
## Training Details
### Model Description
- **Trained by:** [Nekochu](https://huggingface.co/Nekochu), **Model type:** Llama, **Finetuned from model:** [Llama-2-13b-chat](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
- Continued from the earlier LoRA adapter Luminia-13B-v2-QLora

Known issue: [issue]
### Trainer
- [hiyouga/LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Factory) (now LLaMA-Factory)
Hardware: QLoRA training on Windows (Python 3.10.8, CUDA 12.1) with a single GPU with 24 GB VRAM.
### Training hyperparameters
The following hyperparameters were used during training:
- num_epochs: 1.0
- finetuning_type: lora
- quantization_bit: 4
- stage: sft
- learning_rate: 5e-05
- cutoff_len: 4096
- num_train_epochs: 3.0
- max_samples: 100000
- warmup_steps: 0
- train_batch_size: 1
- distributed_type: single-GPU
- num_devices: 1
- rope_scaling: linear
- lora_rank: 32
- lora_target: all
- lora_dropout: 0.15
- bnb_4bit_compute_dtype: bfloat16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
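These names map directly onto LLaMA-Efficient-Tuning's CLI flags, so the run can be reconstructed roughly as below. This is an illustrative sketch, not the exact command used; the dataset and output paths are placeholders:

```
python src/train_bash.py \
    --stage sft \
    --do_train \
    --model_name_or_path meta-llama/Llama-2-13b-chat-hf \
    --dataset <your_dataset> \
    --finetuning_type lora \
    --quantization_bit 4 \
    --lora_rank 32 \
    --lora_target all \
    --lora_dropout 0.15 \
    --rope_scaling linear \
    --cutoff_len 4096 \
    --max_samples 100000 \
    --per_device_train_batch_size 1 \
    --learning_rate 5e-05 \
    --num_train_epochs 3.0 \
    --warmup_steps 0 \
    --lr_scheduler_type cosine \
    --seed 42 \
    --bf16 \
    --output_dir <output_dir>
```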
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.15.0