luffycodes committed
Commit d98c6b5
1 Parent(s): d73f03f

Update README.md

Files changed (1):
  1. README.md +14 -1
README.md CHANGED
@@ -4,7 +4,20 @@ language:
 - en
 ---
 ## Model weights for the Parallel Roberta-Large model ##
-To use this model, you need to use the following [modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/paf_modeling_roberta.py) file.
+
+We provide the [weights](https://huggingface.co/luffycodes/Parallel-Roberta-Large) for the parallel attention and feed-forward design for Roberta-Large.
+
+![pfa (1)](https://github.com/luffycodes/Parallel-Transformers-Pytorch/assets/22951144/e5b76b1c-5fb1-4263-a23b-a61742fe12ae)
+
+## Evaluation results
+
+When fine-tuned on downstream tasks, this model achieves the following results:
+
+GLUE test results:
+
+| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
+|:-----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
+| Score | 89.3 | 91.7 | 94.3 | 96.2  | 64.0 | 91.0  | 90.4 | 80.1 |
 
 If you use this work, please cite:
 Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design:
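
For context (not part of the commit), here is a minimal, hedged sketch of how these weights might be loaded. The earlier README states that the custom [paf_modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/paf_modeling_roberta.py) from the GitHub repo is required; the sketch assumes that file is on the Python path and exposes a `RobertaModel` class with the standard `from_pretrained` interface (both assumptions, not confirmed by the commit).

```python
# Minimal sketch, not from the commit: loading the parallel-design weights.
# Assumes paf_modeling_roberta.py (from the linked GitHub repo) is importable
# and that its model class is named RobertaModel and inherits from
# transformers.PreTrainedModel, so from_pretrained works as usual.
from transformers import AutoTokenizer

from paf_modeling_roberta import RobertaModel  # hypothetical class name

# Use the standard roberta-large tokenizer; whether the weights repo also
# ships tokenizer files is not stated in the commit.
tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("luffycodes/Parallel-Roberta-Large")

inputs = tokenizer("Parallel attention and feed-forward design.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```

As the figure in the README illustrates, the parallel design feeds the same layer input to both the attention and feed-forward sub-layers and sums their outputs, roughly y = x + Attention(LN(x)) + FFN(LN(x)), instead of applying the feed-forward net to the attention output as in the standard sequential block.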