---
language:
- en
tags:
- agriculture
- question-answering
- fine-tuning
- lora
- domain-specific
license: apache-2.0
datasets:
- agriqa
model-index:
- name: TinyLlama-LoRA-AgriQA
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: AgriQA
      type: agriqa
    metrics:
    - type: accuracy
      value: 0.78
      name: Accuracy
---
# AgriQA TinyLlama LoRA Adapter
This repository contains a [LoRA](https://arxiv.org/abs/2106.09685) adapter fine-tuned on the [AgriQA](https://huggingface.co/datasets/shchoi83/agriQA) dataset using the [TinyLlama/TinyLlama-1.1B-Chat](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat) base model.
---
## 🔧 Model Details
- **Base Model**: [`TinyLlama/TinyLlama-1.1B-Chat`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat)
- **Adapter Type**: LoRA (Low-Rank Adaptation)
- **Adapter Size**: ~4.5MB
- **Dataset**: [`shchoi83/agriQA`](https://huggingface.co/datasets/shchoi83/agriQA)
- **Language**: English
- **Task**: Instruction-tuned question answering in the agriculture domain
- **Trained by**: [@theone049](https://huggingface.co/theone049)
---
## 📌 Usage
To use this adapter, load it on top of the base model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "theone049/agriqa-tinyllama-lora-adapter")

# Move the model to a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Run inference with an instruction-style prompt
prompt = """### Instruction:
Answer the agricultural question.
### Input:
What is the ideal pH range for growing rice?
### Response:"""
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
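The prompt in the usage example follows an Alpaca-style instruction format with `### Instruction:`, `### Input:`, and `### Response:` sections. A small helper like the one below (hypothetical, not shipped with this repository) can assemble such prompts consistently for different questions:

```python
def build_prompt(instruction: str, question: str) -> str:
    """Assemble the instruction-style prompt used in the usage example above."""
    return (
        "### Instruction:\n"
        f"{instruction}\n"
        "### Input:\n"
        f"{question}\n"
        "### Response:"
    )

# Reproduces the prompt from the usage example
print(build_prompt("Answer the agricultural question.",
                   "What is the ideal pH range for growing rice?"))
```

Keeping the prompt format identical to the one used during fine-tuning generally matters for adapter quality, so a helper like this reduces the chance of accidental drift.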