---
license: apache-2.0
language:
  - en
---

# Model weights for Parallel RoBERTa-Large

We provide the weights for RoBERTa-Large with the parallel attention and feed-forward (PAF) design.

*Figure: the parallel attention and feed-forward (PAF) block design.*
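
For intuition, below is a minimal, simplified PyTorch sketch of a parallel attention and feed-forward block. It is not the modeling code used to train these weights; the layer sizes and normalization placement are illustrative assumptions. It only shows the core idea: the attention and feed-forward branches read the same input and their outputs are summed into the residual, instead of the feed-forward net consuming the attention output.

```python
# Simplified, assumption-laden sketch of a parallel attention + feed-forward block.
# Not the authors' implementation; sizes/norm placement chosen for illustration only.
import torch
import torch.nn as nn

class ParallelBlock(nn.Module):
    def __init__(self, hidden_size=1024, num_heads=16, ffn_size=4096):
        super().__init__()
        self.norm = nn.LayerNorm(hidden_size)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(hidden_size, ffn_size),
            nn.GELU(),
            nn.Linear(ffn_size, hidden_size),
        )

    def forward(self, x):
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # Parallel design: both branches see the same normalized input and are
        # added to the residual, rather than being applied sequentially.
        return x + attn_out + self.ffn(h)

x = torch.randn(2, 16, 1024)   # (batch, sequence, hidden)
print(ParallelBlock()(x).shape)  # torch.Size([2, 16, 1024])
```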

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|-------|------|------|------|-------|------|-------|------|------|
| Score | 89.3 | 91.7 | 94.3 | 96.2  | 64.0 | 91.0  | 90.4 | 80.1 |
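
As a quick start, the snippet below sketches loading these weights for a GLUE-style sentence classification task with the `transformers` library. The repository id and label count are assumptions made for this example; if the parallel attention/feed-forward architecture is not handled by the stock RoBERTa classes, the custom modeling code released with the paper would be needed instead.

```python
# Minimal quick-start sketch (not verified against this repo): load the weights
# for sequence classification with Hugging Face transformers.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "luffycodes/parallel-roberta-large"  # assumed repo id; check this page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits)
```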

If you use this work, please cite [Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design](https://arxiv.org/abs/2305.13297):

```bibtex
@misc{sonkar2023investigating,
      title={Investigating the Role of Feed-Forward Networks in Transformers Using Parallel Attention and Feed-Forward Net Design},
      author={Shashank Sonkar and Richard G. Baraniuk},
      year={2023},
      eprint={2305.13297},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```