---
license: apache-2.0
pretty_name: Luminia
model_type: llama2
tags:
  - llama-factory
  - lora
  - generated_from_trainer
  - llama2
  - llama
  - instruct
  - finetune
  - gpt4
  - synthetic data
  - stable diffusion
  - alpaca
  - llm
datasets:
  - Nekochu/discord-unstable-diffusion-SD-prompts
  - Nekochu/Luminia-mixture
  - AstraMindAI/RLAIF-Nectar
  - hiyouga/DPO-En-Zh-20k
---
Training resumed from [Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3).
<!-- [05/24] This should include all datasets from LLaMA-Factory, and more.-->

# Luminia-v4 (LoRA only)
Luminia-13B-v4-QLoRA-sft (rank 32) can barely handle the new [Luminia-mixture](https://huggingface.co/datasets/Nekochu/Luminia-mixture),
and [ExtendedPrompts](https://huggingface.co/datasets/Nekochu/discord-unstable-diffusion-SD-prompts) should make prompting more flexible, e.g.:
```
### Instruction:
Create stable diffusion prompt based on the given english description.

### Input:
City street, night, raining, drone shot, cyberpunk

### Response:
```
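The Alpaca-style prompt above can be assembled programmatically; here is a minimal sketch (the helper name and its defaults are illustrative, not part of this repo):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble an Alpaca-style prompt like the example above."""
    prompt = f"### Instruction:\n{instruction}\n\n"
    if input_text:  # the Input block is optional in the Alpaca template
        prompt += f"### Input:\n{input_text}\n\n"
    prompt += "### Response:\n"  # the model completes after this header
    return prompt

prompt = build_alpaca_prompt(
    "Create stable diffusion prompt based on the given english description.",
    "City street, night, raining, drone shot, cyberpunk",
)
```

The generated string can then be passed to the tokenizer of whatever runtime you load the LoRA with.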

As for Stage-B DPO: I do **NOT** recommend using QLoRA-ORPO; the poor LoRA failed to learn any more. :&lt;