luffycodes committed d6ae002 (parent: 09bb874): Update README.md

README.md CHANGED
@@ -4,4 +4,18 @@ language:
 - en
 ---
 ## Model weights for Parallel Roberta-Large model ##
-To use this model, you need to use the following [modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/paf_modeling_roberta.py) file.
+To use this model, you need to use the following [modeling_roberta.py](https://github.com/luffycodes/Parallel-Transformers-Pytorch/blob/main/paf_modeling_roberta.py) file.
+
+If you use this work, please cite:
+Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design
+https://arxiv.org/abs/2305.13297
+```
+@misc{sonkar2023investigating,
+      title={Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design},
+      author={Shashank Sonkar and Richard G. Baraniuk},
+      year={2023},
+      eprint={2305.13297},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
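For context on the "To use this model" line added above, here is a minimal usage sketch. It is an assumption-laden illustration, not part of the commit: it presumes that `paf_modeling_roberta.py` (downloaded from the GitHub link in the diff) is importable and defines a `RobertaModel` class with the standard Hugging Face `from_pretrained` interface, and it uses `"luffycodes/parallel-roberta-large"` as a placeholder hub repository id.

```python
# Minimal usage sketch. Assumptions: `paf_modeling_roberta.py` has been
# downloaded from the GitHub link above and is on the Python path, it defines
# a `RobertaModel` class compatible with `from_pretrained`, and the repo id
# "luffycodes/parallel-roberta-large" is a placeholder to be replaced with
# the actual model repository.
from transformers import AutoTokenizer

from paf_modeling_roberta import RobertaModel  # assumed class name

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("luffycodes/parallel-roberta-large")  # placeholder id

# Encode a sentence and run it through the parallel attention/FFN encoder.
inputs = tokenizer(
    "The feed-forward network runs in parallel with attention.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```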