|
--- |
|
language: |
|
- en |
|
license: other |
|
tags: |
|
- text-generation-inference |
|
- transformers |
|
- unsloth |
|
- mistral |
|
- trl |
|
- biology |
|
- farming |
|
- agriculture |
|
- climate |
|
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit |
|
--- |
|
|
|
# Uploaded model |
|
|
|
- **Developed by:** Caleb DeLeeuw; Copyleft Cultivars, a nonprofit |
|
- **License:** [Hippocratic 3.0 CL-Eco-Extr](https://firstdonoharm.dev/version/3/0/cl-eco-extr.html)
|
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
|
- **Dataset Used:** CopyleftCultivars/Training-Ready_NF_chatbot_conversation_history, curated from real-world agriculture and natural farming questions paired with the best answers from a previous proof-of-concept chatbot, then lightly edited by domain experts
|
|
|
Using real-world user data from a previous farmer-assistant chatbot service and additional curated datasets (prioritizing sustainable, regenerative, organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as on basic benchmarks. The Gemma 2B fine-tune performed best overall, while this Mistral fine-tune remained viable. LoRA adapters were saved for each model.
|
|
|
Shout out to roger j (bhugxer) for help with the dataset and training framework. |
|
|
|
This Mistral model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
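Because this model is fine-tuned from Mistral-7B-Instruct-v0.2, prompts should follow that family's `[INST] ... [/INST]` instruction format. The sketch below shows that formatting with a hypothetical `build_prompt` helper; in practice, prefer the tokenizer's built-in chat template (`tokenizer.apply_chat_template`) so the exact special tokens match the model.

```python
def build_prompt(question: str) -> str:
    """Wrap a single user question in the Mistral-Instruct prompt format.

    Assumes the v0.2 convention: a BOS token followed by the
    instruction enclosed in [INST] ... [/INST] tags.
    """
    return f"<s>[INST] {question.strip()} [/INST]"

# Example: a natural-farming question this model is tuned for.
print(build_prompt("How do I start a no-till garden bed?"))
# → <s>[INST] How do I start a no-till garden bed? [/INST]
```

The model's completion then follows the closing `[/INST]` tag; multi-turn conversations repeat the pattern with prior answers appended between turns.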
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |