# 🏛️ Legal QA Model (T5-based, Indian Law)
This is a fine-tuned version of the T5 model for Question Answering in the Indian legal domain. It was trained on curated QA samples drawn from Indian law, including statutes such as the IPC and CrPC, as well as constitutional provisions.
The model is designed to provide accurate and context-aware answers for questions grounded in Indian legal texts.
## 🔍 Model Details
- Architecture: T5
- Base model: `t5-base`
- Task: Question Answering (QA)
- Domain: Indian Legal System
- Input format: `question: <question> context: <context>`
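The input concatenates the question and its supporting context behind fixed prefixes. A minimal sketch of a formatting helper (the helper name is an illustration, not part of the model's API):

```python
def build_input(question: str, context: str) -> str:
    # Hypothetical helper: produce the "question: ... context: ..."
    # string this model expects as a single input sequence.
    return f"question: {question.strip()} context: {context.strip()}"

print(build_input(
    "What is the punishment for theft?",
    "Section 378 of IPC defines theft as...",
))
```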
## 📦 Files Included
| File Name | Description |
|---|---|
| `model.safetensors` | Fine-tuned model weights (Git LFS) |
| `config.json` | Model configuration |
| `tokenizer_config.json` | Tokenizer configuration |
| `spiece.model` | SentencePiece tokenizer model |
| `added_tokens.json` | Additional token definitions |
| `special_tokens_map.json` | Special token mapping |
| `generation_config.json` | Generation hyperparameters |
## 📊 Intended Use

- 🔎 Question answering over Indian legal texts
- 📜 Legal research tools and assistants
- 🎓 Educational tools for law students

Not recommended for:

- General-purpose QA beyond the legal domain
- Use as a substitute for professional legal advice
## 🧠 Training Details

- Dataset: Indian legal QA samples (IPC, CrPC, Constitution, etc.)
- Model: fine-tuned `t5-base`
- Max input length: 512 tokens
- Max output length: 128 tokens
- Hardware: Google Colab (T4 GPU)
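As one hedged illustration of how such QA pairs could be arranged for seq2seq fine-tuning (the function and field names below are assumptions for illustration, not the actual training pipeline):

```python
MAX_INPUT_TOKENS = 512   # matches the stated max input length
MAX_OUTPUT_TOKENS = 128  # matches the stated max output length

def make_example(question: str, context: str, answer: str) -> dict:
    """Hypothetical preprocessing step: pair the prefixed input text
    with its target answer, the usual shape for T5-style fine-tuning."""
    return {
        "input_text": f"question: {question} context: {context}",
        "target_text": answer,
    }

ex = make_example(
    "What is the punishment for theft?",
    "Section 379 of IPC prescribes the punishment for theft...",
    "Imprisonment of up to three years, or fine, or both.",
)
print(ex["input_text"])
```

Each example would then be tokenized with the input truncated to 512 tokens and the target to 128, per the limits above.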
## ✅ License

This model is released under the Apache-2.0 License. You are free to use, modify, and distribute it with attribution.
## 🤝 Citation / Credit

If you use this model in your research or application, please consider citing:

```bibtex
@misc{legal_qa_indian_t5,
  author       = {Harsh Upadhyay},
  title        = {Legal QA Model using T5 (Indian Law)},
  year         = {2025},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/TheGod-2003/legal_QA_model}}
}
```
## 🧾 How to Use

You can load and run this model with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("TheGod-2003/legal_QA_model")
model = AutoModelForSeq2SeqLM.from_pretrained("TheGod-2003/legal_QA_model")

input_text = (
    "question: What is the punishment for theft? "
    "context: Section 378 of IPC defines theft as..."
)

# Truncate to the 512-token input limit and cap generation at the
# 128-token output limit used during training.
inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```