
80% 1x4 Block Sparse BERT-Large (uncased) Prune OFA

This model was created using the Prune OFA method described in Prune Once for All: Sparse Pre-Trained Language Models, presented at the ENLSP NeurIPS Workshop 2021.

For further details on the model and its results, see our paper and our implementation available here.
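
Below is a minimal usage sketch, assuming the checkpoint loads with the standard Hugging Face transformers API. The use of `AutoModelForMaskedLM` is an assumption: this is a sparse pre-trained language model, typically loaded this way before fine-tuning on a downstream task.

```python
# Minimal sketch (not part of the original card): loading the sparse
# pre-trained checkpoint with Hugging Face transformers.
# AutoModelForMaskedLM is an assumed head choice for a pre-trained LM.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "Intel/bert-large-uncased-sparse-80-1x4-block-pruneofa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Quick sanity check: predict a masked token.
inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
outputs = model(**inputs)
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = outputs.logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

In practice this checkpoint is intended as a sparse starting point for transfer learning, so it would normally be fine-tuned on a downstream task rather than used directly for masked-token prediction.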

