
📚 LegalMVP Dataset Collection

This repository contains curated U.S. legal datasets collected for building retrieval-augmented generation (RAG) systems and other machine learning models in the legal domain. The datasets include the U.S. Code (statutes), federal regulations, and other legal texts in multiple formats.


📂 Repository Structure

```
legalMVP/
│
├── regulations/               # Raw legal texts
│   ├── USCODE-2022-title15.txt
│   ├── USCODE-2023-title15.txt
│   ├── USCODE-2023-title26.txt
│   ├── USCODE-2023-title26.pdf
│   └── ... (other titles/years)
│
├── scripts/                   # Data processing & download scripts
│   └── fetch_regulations.py   # Example: fetches 200 statutes in txt/pdf
│
└── README.md                  # Project documentation
```


📑 Datasets Obtained

We currently have U.S. Code (statutory law) datasets for multiple years, stored as .txt and, where available, .pdf:

  • Title 15: Commerce and Trade

    • USCODE-2022-title15.txt
    • USCODE-2023-title15.txt
  • Title 26: Internal Revenue Code (Tax Law)

    • USCODE-2023-title26.txt
    • USCODE-2023-title26.pdf

More titles can be added as needed.


πŸ› οΈ Data Formats

  • TXT → machine-friendly plain text (ideal for preprocessing, tokenization, embeddings, and training).
  • PDF → reference copies (useful for citation, legal formatting, and validation).

🎯 Intended Use

These datasets are intended for legal NLP research, specifically:

  • Retrieval-Augmented Generation (RAG):
    Building retrieval pipelines to fetch relevant sections of statutes and regulations before passing them to an LLM (a minimal retrieval sketch follows this list).

  • Fine-Tuning / Domain Adaptation:
    Adapting open-source LLMs to understand statutory and regulatory language.

  • Information Extraction:
    Parsing structured knowledge from unstructured statutes.
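
As a rough illustration of the RAG bullet above, the sketch below retrieves the top-k statute chunks for a query and assembles them into a prompt. It is a minimal sketch, assuming chunks have already been produced (see Training Expectations); the model name and the toy `chunks` list are placeholders, not files in this repository.

```python
# Minimal retrieve-then-prompt sketch (cosine similarity over normalized embeddings).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # example model, not a repo requirement

# In practice these would be statute chunks produced during preprocessing.
chunks = [
    "15 U.S.C. example chunk text ...",
    "26 U.S.C. example chunk text ...",
]
chunk_vecs = model.encode(chunks, normalize_embeddings=True)

def retrieve(query: str, k: int = 3) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    top = np.argsort(-(chunk_vecs @ q))[:k]
    return [chunks[i] for i in top]

def build_prompt(query: str) -> str:
    """Concatenate retrieved statute text and the question into an LLM prompt."""
    context = "\n\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context above."

print(build_prompt("What counts as gross income?"))
```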


⚡ Training Expectations

  • Input Size:
    Legal statutes are long and verbose, so chunking (e.g., 512–2048 tokens) is necessary before computing embeddings (see the chunking sketch after this list).

  • Embedding Models:
    Use sentence-transformers or OpenAI embedding models to index statutes for retrieval.

  • RAG Pipelines:
    The main expected gain is retrieval precision (correctly pulling the relevant statute sections into the model's context).

  • Evaluation Metrics (a metrics sketch also follows this list):

    • Retrieval: Recall@k, MRR (Mean Reciprocal Rank).
    • QA: Accuracy, BLEU/ROUGE for generated answers.
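
The chunking and embedding points above can be combined into a short pipeline. This is a minimal sketch assuming a whitespace "tokenizer" and a 512-token window; a real pipeline would chunk with the tokenizer of the chosen embedding model, and the file path and model name are only examples.

```python
# Naive fixed-size chunking followed by sentence-transformers embeddings.
from sentence_transformers import SentenceTransformer

def chunk_text(text: str, max_tokens: int = 512, overlap: int = 64) -> list[str]:
    """Split text into overlapping chunks of roughly max_tokens whitespace tokens."""
    tokens = text.split()
    step = max_tokens - overlap
    return [" ".join(tokens[i:i + max_tokens]) for i in range(0, len(tokens), step)]

with open("regulations/USCODE-2023-title15.txt", encoding="utf-8") as f:
    statute = f.read()

chunks = chunk_text(statute)
model = SentenceTransformer("all-MiniLM-L6-v2")            # example model
embeddings = model.encode(chunks, show_progress_bar=True)  # shape: (n_chunks, dim)
print(len(chunks), embeddings.shape)
```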
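
For the retrieval metrics, a minimal reference implementation, assuming each query has a single gold chunk id and a ranked list of retrieved ids:

```python
def recall_at_k(gold_ids: list[int], ranked_ids: list[list[int]], k: int) -> float:
    """Fraction of queries whose gold chunk appears in the top-k retrieved ids."""
    hits = sum(1 for gold, ranked in zip(gold_ids, ranked_ids) if gold in ranked[:k])
    return hits / len(gold_ids)

def mean_reciprocal_rank(gold_ids: list[int], ranked_ids: list[list[int]]) -> float:
    """Average of 1/rank of the gold chunk (contributes 0 if it was not retrieved)."""
    total = 0.0
    for gold, ranked in zip(gold_ids, ranked_ids):
        if gold in ranked:
            total += 1.0 / (ranked.index(gold) + 1)
    return total / len(gold_ids)

# Toy example: two queries with gold chunks 3 and 7.
print(recall_at_k([3, 7], [[3, 1, 2], [5, 9, 7]], k=2))      # 0.5
print(mean_reciprocal_rank([3, 7], [[3, 1, 2], [5, 9, 7]]))  # (1 + 1/3) / 2 ≈ 0.67
```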

🚧 Next Steps

  1. Expand Coverage

    • Add more U.S. Code titles (e.g., Titles 7, 18, 42).
    • Include Code of Federal Regulations (CFR) for regulatory data.
  2. Preprocessing

    • Normalize whitespace, remove headers/footers (a preprocessing sketch follows this list).
    • Add metadata (Title, Section, Year).
  3. Embedding + Indexing

    • Build vector stores (e.g., FAISS, Weaviate, Chroma); a minimal FAISS sketch follows this list.
  4. Model Training

    • Train/evaluate RAG pipeline with legal queries.
    • Fine-tune LLMs on statute-specific Q&A pairs.
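
A rough sketch of the preprocessing step (item 2 above): whitespace normalization, a heuristic header/footer filter, and per-chunk metadata. The "Page N" pattern and the metadata fields are assumptions, not a specification of this repo's files.

```python
import re

def clean_statute_text(raw: str) -> str:
    """Normalize whitespace and drop lines that look like page headers/footers."""
    kept = []
    for line in raw.splitlines():
        # Heuristic (assumption): print headers/footers often start with "Page <number>".
        if re.match(r"^\s*Page\s+\d+", line):
            continue
        kept.append(line.strip())
    text = re.sub(r"[ \t]+", " ", "\n".join(kept))
    return re.sub(r"\n{3,}", "\n\n", text)

def with_metadata(chunk: str, title: int, year: int, section: str = "") -> dict:
    """Attach the suggested metadata fields (Title, Section, Year) to a chunk."""
    return {"text": chunk, "title": title, "year": year, "section": section}

raw = open("regulations/USCODE-2023-title26.txt", encoding="utf-8").read()
record = with_metadata(clean_statute_text(raw)[:2000], title=26, year=2023)
print(record["title"], record["year"], len(record["text"]))
```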
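
And a minimal FAISS sketch for item 3 (FAISS is one of the stores named above; the model name and toy chunks are illustrative):

```python
# Build a small FAISS index over chunk embeddings and query it.
import faiss                      # pip install faiss-cpu
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # example embedding model
chunks = ["example statute chunk 1", "example statute chunk 2"]  # from preprocessing

vecs = model.encode(chunks, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(vecs.shape[1])   # inner product == cosine on normalized vectors
index.add(vecs)

query = model.encode(["definition of gross income"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)       # top-2 matches
print(ids[0], scores[0])
```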

📜 License

  • The U.S. Code and federal regulations are in the public domain.
  • Scripts and preprocessing logic are released under the MIT License.