Tags: PEFT, Safetensors, English

GitHub: https://github.com/rejzzzz/NyayMitra

LLaMA 3.2 - Fine-tuned on Indian Law Dataset (QLoRA)

This model is a fine-tuned version of Meta LLaMA 3.2 3B, trained with QLoRA on Indian law datasets.
It is designed to assist with legal question answering, case-law summarization, and other NLP tasks in the Indian legal domain.

Base Model

  • Meta LLaMA 3.2 3B

Fine-tuning Approach

  • QLoRA using PEFT (Parameter-Efficient Fine-Tuning); a configuration sketch is shown after this list
  • Trained on AWS SageMaker
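The setup follows the standard QLoRA recipe: the base model is loaded in 4-bit precision and only a small set of LoRA adapter weights is trained on top of it with peft. A minimal configuration sketch is shown below; the base-model repository id, LoRA rank, alpha, and target modules are assumptions, since this card does not publish them.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",            # assumed base repo id (gated on the Hub)
    quantization_config=bnb_config,
    device_map="auto",
)

# Trainable low-rank adapters; rank, alpha, and target modules are illustrative only
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()        # only the adapter weights are trainable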

Datasets Used

  1. Indian Law Dataset
    viber1/indian-law-dataset (Hugging Face Hub; a loading sketch is shown after this list)

  2. LLM Fine Tuning Dataset of Indian Legal Texts
    Hosted on Kaggle
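The Hugging Face dataset above can be pulled directly with the datasets library, as in the short sketch below; its split and column layout is not described in this card, so inspect it before building training prompts.

from datasets import load_dataset

# Download the dataset from the Hugging Face Hub and show its splits/columns
ds = load_dataset("viber1/indian-law-dataset")
print(ds)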

Training Details

  • Framework: PyTorch, Transformers (an illustrative training sketch follows this list)
  • Hardware: AWS SageMaker, ml.g5.xlarge instance (one NVIDIA A10G GPU with 24 GB memory, 100 GB EBS volume)
  • Epochs: 3
  • Learning Rate: -
  • LoRA Rank: -
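For reference, a 3-epoch run with the listed stack (PyTorch, Transformers) could be wired up as sketched below, reusing the quantized PEFT model and the dataset from the sketches above. The tokenizer repository id, text column, batch size, gradient-accumulation steps, and learning rate are placeholders rather than published values.

from transformers import (AutoTokenizer, Trainer, TrainingArguments,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")  # assumed base repo id
tokenizer.pad_token = tokenizer.eos_token                             # LLaMA has no pad token by default

def tokenize(batch):
    # "text" is a placeholder column name; adjust after inspecting the dataset
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized_ds = ds["train"].map(tokenize, batched=True,
                               remove_columns=ds["train"].column_names)

args = TrainingArguments(
    output_dir="llama-3.2-indianlaw-lora",
    num_train_epochs=3,                    # as stated above
    per_device_train_batch_size=2,         # placeholder
    gradient_accumulation_steps=8,         # placeholder
    learning_rate=2e-4,                    # placeholder; not published in this card
    bf16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,                           # PEFT-wrapped model from the QLoRA sketch
    args=args,
    train_dataset=tokenized_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()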

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# This repository hosts a LoRA adapter, so the peft package must be installed
# for transformers to load it on top of the base LLaMA 3.2 3B weights.
model_name = "rej06/NyayMitra"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What is Article 21 of the Indian Constitution?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
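Because this repository hosts only LoRA adapter weights, loading it through transformers as above requires the peft package; transformers should then resolve the base LLaMA 3.2 3B weights referenced in the adapter config automatically. Alternatively, peft's AutoPeftModelForCausalLM can load the adapter together with its base model in a single call.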