---
license: apache-2.0
language:
- en
---
## Model weights for the Parallel RoBERTa-Large model

We provide the [weights](https://huggingface.co/luffycodes/Parallel-Roberta-Large) for the Parallel Attention and Feed-Forward design (PAF) applied to RoBERTa-Large.

To use this model, load it with the [paf_modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/paf_modeling_roberta.py) file.

## How to use this model to get the features of a given text in PyTorch

```python
from transformers import RobertaTokenizer
# Import RobertaModel from the PAF implementation, not from transformers
from paf_modeling_roberta import RobertaModel

# The tokenizer is the standard RoBERTa-Large tokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-large')
model = RobertaModel.from_pretrained('luffycodes/parallel-roberta-large')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
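
The model returns the standard Hugging Face output object, so the token-level features can be read from `last_hidden_state` (a short usage sketch, assuming the usual output fields of `RobertaModel`):

```python
# Token-level features: shape (batch_size, sequence_length, hidden_size)
features = output.last_hidden_state
print(features.shape)
```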

## Efficient GPU implementation
[gpu_paf_modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/gpu_paf_modeling_roberta.py) provides an efficient GPU implementation of the PAF design in PyTorch.

It fuses the key, query, value, and first feed-forward sub-layer (intermediate) projections into a single matrix multiplication:
```python
self.kqv_ffn1.weight.data = torch.cat((attention.self.key.weight.data,
                                       attention.self.query.weight.data,
                                       attention.self.value.weight.data,
                                       intermediate.dense.weight.data))
```
However, I could not efficiently optimize the second feed-forward sub-layer computation to run in parallel.
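
For illustration, here is a minimal sketch of how such a fused projection can be used in a forward pass; the tensor names, shapes, and split sizes below are assumptions for RoBERTa-Large, not the exact code from gpu_paf_modeling_roberta.py:

```python
import torch
import torch.nn as nn

# Assumed RoBERTa-Large sizes: hidden = 1024, intermediate = 4096
hidden_size, intermediate_size = 1024, 4096

# One fused linear layer replaces the separate key, query, value,
# and first feed-forward (intermediate) projections
kqv_ffn1 = nn.Linear(hidden_size, 3 * hidden_size + intermediate_size)

x = torch.randn(2, 16, hidden_size)   # (batch, seq_len, hidden)
fused = kqv_ffn1(x)                   # a single matmul instead of four

# Split the fused output back into its four components
k, q, v, ffn1 = torch.split(
    fused, [hidden_size, hidden_size, hidden_size, intermediate_size], dim=-1
)
```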

## What is Parallel Attention and Feed-Forward Design?

![pfa (1)](https://github.com/luffycodes/Parallel-Transformers-Pytorch/assets/22951144/e5b76b1c-5fb1-4263-a23b-a61742fe12ae)

*On the left is the standard Series Attention and Feed-Forward Net Design (SAF) for transformer models. On the right is the Parallel Attention and Feed-Forward Net Design (PAF) used in models such as PaLM (Chowdhery et al., 2022) and Mesh-Transformers (Wang, 2021).*
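
In code, the difference between the two designs can be sketched roughly as follows; this is a simplified single-layer illustration, and the `attn`/`ffn` modules and LayerNorm placement are schematic assumptions rather than the exact modules used in the repository:

```python
import torch.nn as nn

class SAFBlock(nn.Module):
    """Series design (SAF): the FFN consumes the attention output."""
    def __init__(self, attn, ffn, hidden_size):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.ln1 = nn.LayerNorm(hidden_size)
        self.ln2 = nn.LayerNorm(hidden_size)

    def forward(self, x):
        x = x + self.attn(self.ln1(x))     # attention sub-layer first
        return x + self.ffn(self.ln2(x))   # FFN runs after attention

class PAFBlock(nn.Module):
    """Parallel design (PAF): attention and FFN both read the same input."""
    def __init__(self, attn, ffn, hidden_size):
        super().__init__()
        self.attn, self.ffn = attn, ffn
        self.ln = nn.LayerNorm(hidden_size)

    def forward(self, x):
        h = self.ln(x)
        # The two branches are independent, so they can run in parallel
        return x + self.attn(h) + self.ffn(h)
```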

## Evaluation results of [PAF-RoBERTa-Large](https://huggingface.co/luffycodes/parallel-roberta-large)

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|:-----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| Score | 89.3 | 91.7 | 94.3 | 96.2  | 64.0 | 91.0  | 90.4 | 80.1 |

If you use this work, please cite:

*Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design*
(https://arxiv.org/abs/2305.13297)
```
@misc{sonkar2023investigating,
      title={Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design}, 
      author={Shashank Sonkar and Richard G. Baraniuk},
      year={2023},
      eprint={2305.13297},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```