---
license: apache-2.0
base_model:
  - microsoft/Phi-3-mini-4k-instruct
tags:
  - gguf
  - phi3
  - finetuned
  - llama.cpp
  - ollama
  - legal-assistant
language:
  - en
---

🧠 Phi-3 Mini Fine-Tuned (GGUF) – Legal Assistant

This is a LoRA fine-tuned version of microsoft/Phi-3-mini-4k-instruct, converted to GGUF format for use with llama.cpp, Ollama, or other compatible runtimes.

It was trained on legal documents to act as a context-aware legal assistant that can answer questions from uploaded contracts and policies.

🔧 Model Details

  • Base model: microsoft/Phi-3-mini-4k-instruct
  • Fine-tuned with: LoRA (PEFT) + TRL's SFTTrainer
  • Converted to GGUF using: convert_hf_to_gguf.py from llama.cpp
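
For reference, the sketch below shows what this kind of LoRA + SFTTrainer setup typically looks like. It is illustrative only: the dataset file, LoRA ranks, and output directory are assumptions, not the exact recipe used for this model.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Illustrative assumptions: the training file, its "text" column, and the LoRA hyperparameters.
dataset = load_dataset("json", data_files="legal_corpus.jsonl", split="train")

trainer = SFTTrainer(
    model="microsoft/Phi-3-mini-4k-instruct",  # base model; TRL loads the model and tokenizer
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),  # LoRA adapter config
    args=SFTConfig(output_dir="phi3-legal-lora"),
)
trainer.train()

After merging the adapter back into the base weights, a command along these lines produces the GGUF file (the merged-checkpoint directory name is a placeholder):

python convert_hf_to_gguf.py ./phi3-legal-merged --outfile phi3-finetuned.gguf --outtype f16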

🛠 How to Use

πŸ” With llama.cpp

./main -m phi3-finetuned.gguf -p "What rights does this contract give me?"

(In recent llama.cpp builds the CLI binary is named llama-cli rather than main.)

πŸ” With 🐍 With Python + llama-cpp-python

from llama_cpp import Llama

# Load the GGUF model; Phi-3 Mini supports a 4k-token context window.
llm = Llama(model_path="phi3-finetuned.gguf", n_ctx=4096)

output = llm("Summarize the terms of this agreement.")
print(output["choices"][0]["text"])  # print only the generated completion text

πŸ” With πŸ€– With Ollama (if merged)

ollama create phi3-legal -f Modelfile
ollama run phi3-legal
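
If the repository's Modelfile is not available, a minimal one along the lines below should work; the Phi-3 prompt template and stop token here are assumptions based on the base model's chat format.

FROM ./phi3-finetuned.gguf
TEMPLATE """<|user|>
{{ .Prompt }}<|end|>
<|assistant|>
"""
PARAMETER stop "<|end|>"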

🧾 Use Cases

This fine-tuned model is intended for legal document analysis and Q&A applications.

Example questions it can answer:

  • "Can this agreement be terminated without prior notice?"
  • "Do I have refund rights under this policy?"
  • "What are the obligations mentioned in clause 3?"
  • "Is there an arbitration clause in this contract?"

It is designed to provide helpful explanations rather than legal advice, summarizing and interpreting clauses based on the uploaded document text.
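
As an illustration of that pattern, the sketch below supplies a document excerpt as context through llama-cpp-python's chat API; the file name and prompt wording are assumptions.

from llama_cpp import Llama

llm = Llama(model_path="phi3-finetuned.gguf", n_ctx=4096)

# Hypothetical uploaded document; in practice this text comes from the application.
contract_text = open("contract.txt").read()
question = "Is there an arbitration clause in this contract?"

response = llm.create_chat_completion(messages=[
    {"role": "user", "content": f"Contract:\n{contract_text}\n\nQuestion: {question}"}
])
print(response["choices"][0]["message"]["content"])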


📁 Files

  • phi3-finetuned.gguf – the GGUF model file used for inference
  • README.md – description and usage guide (this file)
  • Modelfile (optional) – Ollama model recipe (if you use Ollama)

🧠 Credits

  • Project: DocuAnalyzer AI
  • Author: Vighnesh M S (@VGreatVig07)
  • Fine-tuning: Performed using Hugging Face transformers, trl, and PEFT (LoRA)
  • Conversion: Model converted to .gguf format using llama.cpp's convert_hf_to_gguf.py

Thanks to open-source contributions from:

  • Microsoft (Phi-3 base model)
  • Hugging Face ecosystem
  • llama.cpp team