---
datasets:
  - gozfarb/ShareGPT_Vicuna_unfiltered
---

## Convert tools

https://github.com/practicaldreamer/vicuna_to_alpaca
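The conversion turns ShareGPT-style conversation logs into Alpaca-style instruction/response pairs. The sketch below illustrates the idea only; it is not the script from the repo above, and the field names (`conversations`, `from`, `value`) assume the ShareGPT dump format.

```python
import json

# Minimal sketch: pair each human turn with the assistant turn that follows
# it and emit Alpaca-style records. The actual vicuna_to_alpaca script may
# handle multi-turn context differently.
def sharegpt_to_alpaca(conversations):
    pairs = []
    for convo in conversations:
        turns = convo.get("conversations", [])
        for human, assistant in zip(turns[::2], turns[1::2]):
            if human.get("from") == "human" and assistant.get("from") == "gpt":
                pairs.append({
                    "instruction": human["value"],
                    "input": "",
                    "output": assistant["value"],
                })
    return pairs

if __name__ == "__main__":
    with open("ShareGPT_Vicuna_unfiltered.json") as f:   # placeholder path
        data = json.load(f)
    with open("alpaca_format.json", "w") as f:
        json.dump(sharegpt_to_alpaca(data), f, indent=2)
```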

## Training on

https://github.com/oobabooga/text-generation-webui

At the moment I'm training on v4.3 of the dataset, at full context length.

This LoRA is already fairly functional, but it is far from finished training; the estimated total training time from the start is about 200 hours. To use it, replace the base model's config files with Vicuna's (I will host them here), load the normal LLaMA model with those configs, and then load the LoRA on top.
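As a rough sketch of that loading order using Transformers and PEFT (paths are placeholders; this assumes the Vicuna config files are already in the base model directory):

```python
# Minimal sketch: load the base LLaMA model, then apply this LoRA with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-base"   # base LLaMA with Vicuna config files
lora_path = "path/to/this-lora"          # this repository's LoRA weights

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
model = AutoModelForCausalLM.from_pretrained(base_model_path)
model = PeftModel.from_pretrained(model, lora_path)

prompt = "### Human: Hello!\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```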