# 🤗 AgriQA TinyLlama LoRA Adapter
This repository contains a LoRA adapter fine-tuned on the AgriQA dataset using the TinyLlama/TinyLlama-1.1B-Chat base model.
## 🔧 Model Details
- Base Model: TinyLlama/TinyLlama-1.1B-Chat
- Adapter Type: LoRA (Low-Rank Adaptation)
- Adapter Size: ~4.5MB
- Dataset: shchoi83/agriQA
- Language: English
- Task: Instruction-tuned question answering in the agriculture domain
- Trained by: @theone049
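
Because the adapter is instruction-tuned, prompts should follow the `### Instruction` / `### Input` / `### Response` template shown in the Usage section. A minimal formatting helper (the function name is ours, not part of the released adapter) keeps that template in one place:

```python
def build_prompt(instruction: str, question: str) -> str:
    """Format a question in the template used by the Usage example."""
    return (
        "### Instruction:\n"
        f"{instruction}\n"
        "### Input:\n"
        f"{question}\n"
        "### Response:"
    )

print(build_prompt("Answer the agricultural question.",
                   "What is the ideal pH range for growing rice?"))
```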
## 📌 Usage
To use this adapter, load it on top of the base model:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "theone049/agriqa-tinyllama-lora-adapter")
model.to(device)

# Run inference
prompt = """### Instruction:
Answer the agricultural question.
### Input:
What is the ideal pH range for growing rice?
### Response:"""

inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
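
Note that `generate` returns the full sequence, prompt included, so the decoded string echoes the template before the answer. A small helper (the name is ours, for illustration) keeps only the generated response:

```python
def extract_response(decoded: str) -> str:
    """Return only the text after the final '### Response:' marker."""
    return decoded.split("### Response:")[-1].strip()

sample = (
    "### Instruction:\nAnswer the agricultural question.\n"
    "### Input:\nWhat is the ideal pH range for growing rice?\n"
    "### Response:\nA slightly acidic soil is generally preferred."
)
print(extract_response(sample))
```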