Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF

This model was converted to GGUF format from CopyleftCultivars/llama-3.1-natural-farmer-16bit using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Llama 3.1 Natural Farmer V1 by Copyleft Cultivars (8B)


  • Developed by: Caleb DeLeeuw (Solshine), Copyleft Cultivars (a nonprofit, protecting and preserving vulnerable plants)
  • License: Llama3.1
  • Finetuned from model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit

Using real-world user data from a previous farmer-assistant chatbot service, along with additional curated datasets that prioritize sustainable regenerative organic farming practices, this LLM was iteratively fine-tuned and tested against our previous releases (Gemma 2B Natural Farmer and Mistral 7B Natural Farmer), as well as with basic benchmarking. The model was then uploaded to the Hugging Face Hub in the hope that it will help farmers everywhere and inspire future work.

Shout out to roger j (bhugxer) for help with the dataset and training framework.

Testing, and further compilation for integration into on-device app interfaces, are ongoing. This project was created by Copyleft Cultivars, a nonprofit, in partnership with the Open Nutrient Project and Evergreen State College. It serves to democratize access to farming knowledge and support the protection of vulnerable plants.

This is V1 beta. It runs locally on Ollama with some experimental configuration, so you can use it off the grid and in places where internet access is unavailable (i.e., most farms I've been on).
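One way to set up that local Ollama configuration is to import the GGUF file through a Modelfile. This is a minimal sketch, not the project's exact setup; the file path and model name below are illustrative:

```shell
# Write a Modelfile that points at the downloaded GGUF
# (path is illustrative -- adjust to wherever you saved the file)
cat > Modelfile <<'EOF'
FROM ./llama-3.1-natural-farmer-16bit-q8_0.gguf
EOF

# Register the model with Ollama under a local name, then chat with it
ollama create natural-farmer -f Modelfile
ollama run natural-farmer "What cover crops suit sandy soil?"
```

Once created, the model runs fully offline through `ollama run`.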

This Llama 3.1 model was trained with Unsloth and Hugging Face's TRL library.

This is a fine-tune of Llama 3.1 and inherits all use terms and licensing from the base model. Please review Meta's original release for more details.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -p "The meaning to life and the universe is"

Server:

llama-server --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -c 2048
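With the server running (it listens on port 8080 by default), you can send it requests over HTTP. A sketch using curl against the server's OpenAI-compatible chat endpoint; the prompt is illustrative:

```shell
# Query the local llama-server chat endpoint (default port 8080)
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "How do I brew compost tea for tomatoes?"}
        ],
        "max_tokens": 256
      }'
```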

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

git clone https://github.com/ggerganov/llama.cpp

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (e.g., LLAMA_CUDA=1 for NVIDIA GPUs on Linux).

cd llama.cpp && LLAMA_CURL=1 make

Step 3: Run inference through the main binary.

./llama-cli --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -p "The meaning to life and the universe is"

or

./llama-server --hf-repo Solshine/llama-3.1-natural-farmer-16bit-Q8_0-GGUF --hf-file llama-3.1-natural-farmer-16bit-q8_0.gguf -c 2048