馃 AgriQA TinyLlama LoRA Adapter

This repository contains a LoRA adapter fine-tuned on the AgriQA dataset using the TinyLlama/TinyLlama-1.1B-Chat base model.


## 🔧 Model Details


## 📌 Usage

To use this adapter, load it on top of the base model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "theone049/agriqa-tinyllama-lora-adapter")

# Move the model to GPU if one is available, then switch to eval mode
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.eval()

# Run inference with an Alpaca-style prompt
prompt = """### Instruction:
Answer the agricultural question.

### Input:
What is the ideal pH range for growing rice?

### Response:"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
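If you query the model repeatedly, it can help to assemble the Alpaca-style prompt programmatically rather than by hand. The helper below is only an illustrative sketch (the name `build_prompt` is not part of this repository); it reproduces the Instruction / Input / Response layout used above:

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble an Alpaca-style prompt matching the format used above.

    The "### Input:" section is omitted when no input text is given,
    mirroring the usual Alpaca convention for instruction-only prompts.
    """
    prompt = f"### Instruction:\n{instruction}\n\n"
    if input_text:
        prompt += f"### Input:\n{input_text}\n\n"
    prompt += "### Response:"
    return prompt

# Example: build the same prompt as in the snippet above
print(build_prompt("Answer the agricultural question.",
                   "What is the ideal pH range for growing rice?"))
```

The resulting string can be passed directly to the tokenizer in place of the hand-written `prompt` above.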