---
language:
  - en
tags:
  - agriculture
  - question-answering
  - fine-tuning
  - lora
  - domain-specific
license: apache-2.0
datasets:
  - agriqa
model-index:
  - name: TinyLlama-LoRA-AgriQA
    results:
      - task:
          type: question-answering
          name: Question Answering
        dataset:
          name: AgriQA
          type: agriqa
        metrics:
          - type: accuracy
            value: 0.78
            name: Accuracy
---

馃 AgriQA TinyLlama LoRA Adapter

This repository contains a LoRA adapter fine-tuned on the AgriQA dataset, with `TinyLlama/TinyLlama-1.1B-Chat` as the base model.


## 🔧 Model Details

- **Base model:** `TinyLlama/TinyLlama-1.1B-Chat`
- **Adapter type:** LoRA (loaded via PEFT)
- **Training data:** AgriQA (agricultural question answering)
- **Language:** English
- **License:** Apache 2.0
- **Reported accuracy:** 0.78 on AgriQA

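If you want to check the adapter's configuration (including the base model it was trained against) before downloading any weights, a minimal sketch using PEFT:

```python
from peft import PeftConfig

# Fetch only the adapter's config file from the Hub (no model weights are downloaded)
config = PeftConfig.from_pretrained("theone049/agriqa-tinyllama-lora-adapter")

print(config.peft_type)                 # adapter type, e.g. LORA
print(config.base_model_name_or_path)   # base model the adapter expects
```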
## 📌 Usage

To use this adapter, install `transformers` and `peft`, then load it on top of the base model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat").to(device)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "theone049/agriqa-tinyllama-lora-adapter")

# Build a prompt in the instruction format used during fine-tuning
prompt = """### Instruction:
Answer the agricultural question.

### Input:
What is the ideal pH range for growing rice?

### Response:"""

# Run inference
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
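
For deployment, the LoRA weights can optionally be merged into the base model so that PEFT is no longer needed at inference time. A minimal sketch, continuing from the snippet above (the output directory name is just an example):

```python
# Merge the LoRA weights into the base model and drop the PEFT wrapper
merged_model = model.merge_and_unload()

# Save the merged model and tokenizer to a local directory of your choice
merged_model.save_pretrained("tinyllama-agriqa-merged")   # example path
tokenizer.save_pretrained("tinyllama-agriqa-merged")
```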