AvocadoMuffin committed
Commit 9734b9b · verified · 1 Parent(s): 1c0afd8

Add all model files (clean copy for QA fine-tune)

README.md CHANGED
@@ -4,8 +4,30 @@ datasets:
 - cuad
 ---
 
-# RoBERTa Base Model fine-tuned with CUAD dataset
-This model is the fine-tuned version of "RoBERTa Base"
-using CUAD dataset
+# Fine-tuned RoBERTa-based legal contract review QA model 👩‍⚖️ 📑
 
-Link for model checkpoint: https://github.com/TheAtticusProject/cuad
+Best model presented in the master's thesis [*Exploring CUAD using RoBERTa span-selection QA models for legal contract review*](https://github.com/gustavhartz/transformers-legal-tasks) for QA on the Contract Understanding Atticus Dataset. The full training logic and the thesis itself are available through the link.
+
+It outperforms the most popular Hugging Face CUAD model, [Rakib/roberta-base-on-cuad](https://huggingface.co/Rakib/roberta-base-on-cuad), and is the best-performing CUAD model on Hugging Face as of 26/06/2022:
+
+| **Model name**                          | **Top 1 Has Ans F1** | **Top 3 Has Ans F1** |
+|-----------------------------------------|----------------------|----------------------|
+| gustavhartz/roberta-base-cuad-finetuned | 85.68                | 94.06                |
+| Rakib/roberta-base-on-cuad              | 81.26                | 92.48                |
+
+For questions etc., go through the GitHub repo :)
+
+### Citation
+
+If you found the code or thesis helpful, please cite it :)
+```
+@thesis{ha2022,
+  author    = {Hartz, Gustav Selfort},
+  title     = {Exploring CUAD using RoBERTa span-selection QA models for legal contract review},
+  language  = {English},
+  format    = {thesis},
+  year      = {2022},
+  publisher = {DTU Department of Applied Mathematics and Computer Science}
+}
+```
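
For reference, a minimal usage sketch (not part of this commit): it assumes the standard `transformers` question-answering pipeline and the `gustavhartz/roberta-base-cuad-finetuned` model id from the table above; the contract snippet is invented for illustration.

```python
# Minimal sketch: query the fine-tuned model through the standard QA pipeline.
# Model id taken from the README table above; the contract text is made up.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="gustavhartz/roberta-base-cuad-finetuned",
)

contract = (
    "This Agreement shall be governed by and construed in accordance with "
    "the laws of the State of Delaware."
)
result = qa(
    question="Which state's laws govern the agreement?",
    context=contract,
)
print(result["answer"], result["score"])
```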
config.json CHANGED
@@ -5,8 +5,8 @@
5
  ],
6
  "attention_probs_dropout_prob": 0.1,
7
  "bos_token_id": 0,
 
8
  "eos_token_id": 2,
9
- "gradient_checkpointing": false,
10
  "hidden_act": "gelu",
11
  "hidden_dropout_prob": 0.1,
12
  "hidden_size": 768,
@@ -19,7 +19,8 @@
19
  "num_hidden_layers": 12,
20
  "pad_token_id": 1,
21
  "position_embedding_type": "absolute",
22
- "transformers_version": "4.4.0.dev0",
 
23
  "type_vocab_size": 1,
24
  "use_cache": true,
25
  "vocab_size": 50265
 
5
  ],
6
  "attention_probs_dropout_prob": 0.1,
7
  "bos_token_id": 0,
8
+ "classifier_dropout": null,
9
  "eos_token_id": 2,
 
10
  "hidden_act": "gelu",
11
  "hidden_dropout_prob": 0.1,
12
  "hidden_size": 768,
 
19
  "num_hidden_layers": 12,
20
  "pad_token_id": 1,
21
  "position_embedding_type": "absolute",
22
+ "torch_dtype": "float32",
23
+ "transformers_version": "4.18.0",
24
  "type_vocab_size": 1,
25
  "use_cache": true,
26
  "vocab_size": 50265
merges.txt CHANGED
@@ -1,4 +1,4 @@
-#version: 0.2
+#version: 0.2 - Trained by `huggingface/tokenizers`
 Ġ t
 Ġ a
 h e
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:251347208d8d5bfda2eecf1fd675ac63c710977b053241d900239c8d7dd188e0
-size 496316087
+oid sha256:ada2ef8e60d963ccbacaa29d16ff771112336e3abf6f09d7d4d4caf627134037
+size 496294641
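
Since this file is a Git LFS pointer, a downloaded copy of the weights can be checked against the new oid and size above. A small sketch; the local file path is an assumption:

```python
# Sketch: verify a locally downloaded pytorch_model.bin against the LFS
# pointer in this commit. Expected values come from the diff above; the
# file path is hypothetical.
import hashlib

EXPECTED_OID = "ada2ef8e60d963ccbacaa29d16ff771112336e3abf6f09d7d4d4caf627134037"
EXPECTED_SIZE = 496294641

digest = hashlib.sha256()
size = 0
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
        size += len(chunk)

assert digest.hexdigest() == EXPECTED_OID and size == EXPECTED_SIZE
```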
special_tokens_map.json CHANGED
@@ -1 +1 @@
-{"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
+{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1 +1 @@
-{"errors": "replace", "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "do_lower_case": false, "model_max_length": 512, "name_or_path": "roberta-base"}
+{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "add_prefix_space": false, "errors": "replace", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": "<mask>", "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "/content/drive/MyDrive/models/C10_roberta-base-100%-using-CUAD-trained-on-Only-Has-Ans-dataset", "tokenizer_class": "RobertaTokenizer"}
vocab.json CHANGED
The diff for this file is too large to render. See raw diff