Uploaded model
- Developed by: pacozaa
- License: apache-2.0
- Finetuned from model: unsloth/mistral-7b-bnb-4bit
- This is a LoRA adapter trained on liyucheng/ShareGPT90K. The training step count keeps growing over time since I am fine-tuning in Colab; it is currently at 550 steps. A minimal inference sketch follows this list.
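Below is a minimal inference sketch using Unsloth to load the adapter. It assumes the adapter is published under the repo id pacozaa/mistral-sharegpt90k and uses an illustrative 2048-token context length and prompt; it is not the exact configuration used here.

```python
# Minimal inference sketch (assumed repo id and settings, for illustration only).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="pacozaa/mistral-sharegpt90k",  # LoRA adapter on top of unsloth/mistral-7b-bnb-4bit
    max_seq_length=2048,
    dtype=None,          # auto-detect: bfloat16 on newer GPUs, float16 otherwise
    load_in_4bit=True,   # match the 4-bit base model
)
FastLanguageModel.for_inference(model)  # switch to faster inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```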
Ollama
- Model page: https://ollama.com/pacozaa/mistralsharegpt90
- Run the model with:

```
ollama run pacozaa/mistralsharegpt90
```
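Once the model is running locally, you can also call it through Ollama's standard HTTP API. The sketch below posts to the `/api/generate` endpoint on the default port; the prompt is only an example.

```python
# Small sketch calling a locally running Ollama server (assumes `ollama serve`
# is up and the model has been pulled with `ollama run pacozaa/mistralsharegpt90`).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "pacozaa/mistralsharegpt90",
        "prompt": "Summarize what a LoRA adapter is in one sentence.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```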
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
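For context, a rough sketch of that Unsloth + TRL setup is shown below. The LoRA rank, batch size, learning rate, and the ShareGPT field names are assumptions for illustration, not the exact values used for this adapter; only the 550-step count comes from the description above.

```python
# Rough training sketch (hyperparameters and dataset field names are guesses).
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("liyucheng/ShareGPT90K", split="train")

def to_text(example):
    # Flatten ShareGPT-style conversation turns into one training string;
    # the actual column/field names in the dataset may differ.
    turns = example["conversations"]
    return {"text": "\n".join(f'{t["from"]}: {t["value"]}' for t in turns)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=550,          # matches the step count mentioned above
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```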