piresramon committed
Commit 889f197 · verified · 1 Parent(s): 2e1ed6c

Update README.md

Files changed (1):
  1. README.md +36 -0
README.md CHANGED
@@ -54,3 +54,39 @@ configs:
  - split: train
    path: guidelines/train-*
---

# OAB-Bench

| [**Paper**](https://arxiv.org/abs/XXXX.XXXXX) | [**Code**](https://github.com/maritaca-ai/oab-bench) |

OAB-Bench is a benchmark for evaluating Large Language Models (LLMs) on legal writing tasks, specifically designed for the Brazilian Bar Examination (OAB). The benchmark comprises 105 questions across seven areas of law from recent editions of the exam.

- OAB-Bench evaluates LLMs on their ability to write legal documents and answer discursive questions
- The benchmark includes the comprehensive evaluation guidelines used by human examiners
- Results show that frontier models like Claude-3.5 Sonnet can achieve passing grades (≥6.0) in most exams
- The evaluation pipeline uses LLMs as automated judges, achieving strong correlation with human scores
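
Since the YAML header above declares a `guidelines` config with a `train` split, a minimal sketch of loading it with the Hugging Face `datasets` library might look like the snippet below. The dataset ID `maritaca-ai/oab-bench` is an assumption (mirroring the linked code repository name), so substitute the actual repository ID if it differs.

```python
from datasets import load_dataset

# Load the "guidelines" config; its train-* data files are declared in the
# dataset card's YAML header above.
# NOTE: the dataset ID below is an assumption based on the linked code repo,
# not something stated in this card. Replace it with the actual repository ID.
guidelines = load_dataset("maritaca-ai/oab-bench", "guidelines", split="train")

print(guidelines)      # features and number of rows
print(guidelines[0])   # inspect the first guideline record
```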

## Results

Our evaluation of four LLMs on OAB-Bench shows:

| Model | Average Score | Passing Rate | Best Area |
| --- | --- | --- | --- |
| Claude-3.5 Sonnet | 7.93 | 100% | Constitutional Law (8.43) |
| GPT-4o | 6.87 | 86% | Civil Law (7.42) |
| Sabiá-3 | 6.55 | 76% | Labor Law (7.17) |
| Qwen2.5-72B | 5.21 | 24% | Administrative Law (7.00) |

The LLM judge (o1) shows strong correlation with human scores when evaluating approved exams, with Mean Absolute Error (MAE) ranging from 0.04 to 0.28 across different law areas.
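
For reference, MAE here is simply the average absolute gap between the judge's grade and the human examiner's grade on the 0-10 scale. A small illustrative computation, using placeholder scores rather than data from the paper:

```python
# Placeholder grades on the 0-10 OAB scale; real grades come from the exams.
human_scores = [7.5, 6.0, 8.2, 6.8]   # grades assigned by human examiners
judge_scores = [7.4, 6.1, 8.0, 7.0]   # grades assigned by the LLM judge

# Mean Absolute Error: average of |judge - human| over all answers.
mae = sum(abs(j - h) for j, h in zip(judge_scores, human_scores)) / len(human_scores)
print(f"MAE = {mae:.2f}")
```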

## Citation

If you find this work helpful, please cite our paper:

```bibtex
@inproceedings{pires2025automatic,
  title={Automatic Legal Writing Evaluation of LLMs},
  author={Pires, Ramon and Malaquias Junior, Roseval and Nogueira, Rodrigo},
  booktitle={Proceedings of the International Conference on Artificial Intelligence and Law (ICAIL)},
  year={2025}
}
```