# LegalMVP Dataset Collection
This repository contains curated U.S. legal datasets collected for building retrieval-augmented generation (RAG) and other machine learning models in the legal domain. The datasets include U.S. Codes (statutes), federal regulations, and other legal texts in multiple formats.
## Repository Structure

```
legalMVP/
├── regulations/                 # Raw legal texts
│   ├── USCODE-2022-title15.txt
│   ├── USCODE-2023-title15.txt
│   ├── USCODE-2023-title26.txt
│   ├── USCODE-2023-title26.pdf
│   └── ... (other titles/years)
├── scripts/                     # Data processing & download scripts
│   └── fetch_regulations.py     # Example: fetches 200 statutes in txt/pdf
└── README.md                    # Project documentation
```
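For orientation, `scripts/fetch_regulations.py` could follow the shape sketched below. This is an illustrative assumption, not the script's actual contents: `BASE_URL` is a placeholder, and the real download endpoint, filename pattern, and error handling will differ.

```python
"""Illustrative download sketch (NOT the actual fetch_regulations.py).
BASE_URL is a placeholder; the real endpoint and paths will differ."""
import urllib.request
from pathlib import Path

BASE_URL = "https://example.gov/uscode"  # placeholder endpoint (assumption)


def uscode_filename(year: int, title: int, ext: str = "txt") -> str:
    """Build the USCODE-<year>-title<N>.<ext> name used under regulations/."""
    return f"USCODE-{year}-title{title}.{ext}"


def fetch_title(year: int, title: int, dest: Path, ext: str = "txt") -> Path:
    """Download one title/year into dest/, skipping files already present."""
    dest.mkdir(parents=True, exist_ok=True)
    out = dest / uscode_filename(year, title, ext)
    if not out.exists():
        urllib.request.urlretrieve(f"{BASE_URL}/{out.name}", out)
    return out
```

Looping `fetch_title` over pairs like `(2022, 15)` and `(2023, 26)` would reproduce the files listed in the next section.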
## Datasets Obtained
We currently have U.S. Code (statutory law) datasets for multiple years, stored as both `.txt` and `.pdf`:

**Title 15: Commerce and Trade**
- `USCODE-2022-title15.txt`
- `USCODE-2023-title15.txt`

**Title 26: Internal Revenue Code (Tax Law)**
- `USCODE-2023-title26.txt`
- `USCODE-2023-title26.pdf`
More titles can be added as needed.
## Data Formats

- **TXT**: machine-friendly plain text (ideal for preprocessing, tokenization, embeddings, and training).
- **PDF**: reference copies (useful for citation, legal formatting, and validation).
## Intended Use

These datasets are intended for legal NLP research, specifically:

- **Retrieval-Augmented Generation (RAG):** building retrieval pipelines that fetch the relevant sections of statutes and regulations before passing them to LLMs.
- **Fine-Tuning / Domain Adaptation:** adapting open-source LLMs to statutory and regulatory language.
- **Information Extraction:** parsing structured knowledge from unstructured statutes.
## Training Expectations

- **Input Size:** legal statutes are long and verbose; chunking (e.g., 512–2048 tokens) is necessary before computing embeddings.
- **Embedding Models:** use sentence-transformers or OpenAI embedding models to index statutes for retrieval.
- **RAG Pipelines:** expect the main gains in retrieval precision (correctly pulling the relevant statute sections).
- **Evaluation Metrics:**
  - Retrieval: Recall@k, MRR (Mean Reciprocal Rank).
  - QA: accuracy, BLEU/ROUGE for generated answers.
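The chunking step and the two retrieval metrics above can be sketched in plain Python. This is a minimal illustration: token counts are approximated by whitespace-split words, whereas a real pipeline would count with the embedding model's own tokenizer.

```python
# Sketch of fixed-size overlapping chunking plus Recall@k and MRR.
# Word counts stand in for token counts (an approximation).
from typing import Sequence


def chunk_text(text: str, max_tokens: int = 512, overlap: int = 64) -> list[str]:
    """Split a long statute into overlapping word windows."""
    words = text.split()
    step = max_tokens - overlap
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, max(len(words) - overlap, 1), step)]


def recall_at_k(retrieved: Sequence[Sequence[str]],
                relevant: Sequence[set], k: int) -> float:
    """Fraction of queries with at least one relevant doc in the top k."""
    hits = sum(1 for docs, rel in zip(retrieved, relevant) if rel & set(docs[:k]))
    return hits / len(relevant)


def mrr(retrieved: Sequence[Sequence[str]], relevant: Sequence[set]) -> float:
    """Mean reciprocal rank of the first relevant doc per query."""
    total = 0.0
    for docs, rel in zip(retrieved, relevant):
        for rank, doc in enumerate(docs, start=1):
            if doc in rel:
                total += 1.0 / rank
                break
    return total / len(relevant)
```

The overlap keeps a section heading and its first sentences from being split across a chunk boundary with no shared context.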
## Next Steps

**Expand Coverage**
- Add more U.S. Code titles (e.g., Titles 7, 18, 42).
- Include Code of Federal Regulations (CFR) for regulatory data.
**Preprocessing**
- Normalize whitespace, remove headers/footers.
- Add metadata (Title, Section, Year).
**Embedding + Indexing**
- Build vector stores (e.g., FAISS, Weaviate, Chroma).
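Before committing to FAISS, Weaviate, or Chroma, the index can be prototyped as brute-force cosine similarity over an in-memory NumPy matrix. The `embed` function below is a deterministic hashed bag-of-words toy standing in for a real embedding model; the class and method names are illustrative:

```python
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic bag-of-words embedding (stand-in for a real model)."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[sum(ord(c) for c in word) % dim] += 1.0  # hash word to a bucket
    norm = np.linalg.norm(v)
    return v / norm if norm else v


class VectorIndex:
    """Minimal in-memory store with a FAISS-like add/search shape."""

    def __init__(self, dim: int = 64):
        self.dim, self.docs, self.vectors = dim, [], []

    def add(self, doc: str) -> None:
        self.docs.append(doc)
        self.vectors.append(embed(doc, self.dim))

    def search(self, query: str, k: int = 3) -> list[str]:
        # Cosine similarity reduces to a dot product on unit vectors.
        sims = np.stack(self.vectors) @ embed(query, self.dim)
        return [self.docs[i] for i in np.argsort(-sims)[:k]]
```

Swapping the toy `embed` for a sentence-transformers model and the list scan for a FAISS index changes nothing about the calling code.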
**Model Training**
- Train and evaluate a RAG pipeline on legal queries.
- Fine-tune LLMs on statute-specific Q&A pairs.
## License
- The U.S. Code and federal regulations are in the public domain.
- Scripts and preprocessing logic are released under the MIT License.