Update README.md with new model card content
README.md CHANGED

````diff
@@ -8,7 +8,7 @@ tags:
 - keras
 pipeline_tag: text-classification
 ---
-
+### Model Overview
 FNet is a set of language models published by Google as part of the paper [FNet: Mixing Tokens with Fourier Transforms](https://arxiv.org/abs/2105.03824). FNet replaces the self-attention of BERT with an unparameterized Fourier transform, dramatically lowering the number of trainable parameters in the model. FNet trains to 92-97% of the accuracy of its BERT counterparts on the GLUE benchmark, with faster training and much smaller saved checkpoints.
 
 Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
@@ -138,4 +138,4 @@ classifier = keras_hub.models.FNetClassifier.from_preset(
     preprocessor=None,
 )
 classifier.fit(x=features, y=labels, batch_size=2)
-```
+```
````
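The model overview added in the first hunk describes FNet's core trick: BERT's self-attention sub-layer is replaced by an unparameterized 2D Fourier transform over the sequence and hidden dimensions, keeping only the real part. For intuition, here is a minimal NumPy sketch of that mixing step as the paper describes it; this is an illustration, not keras_hub's actual layer implementation:

```python
import numpy as np

def fourier_mix(x: np.ndarray) -> np.ndarray:
    """FNet token mixing: 2D DFT over the sequence and hidden axes, real part only.

    x has shape (batch, seq_len, hidden_dim). No trainable parameters are involved,
    which is part of why FNet checkpoints are smaller than comparable BERT checkpoints.
    """
    return np.real(np.fft.fft2(x, axes=(-2, -1)))

# Toy usage: mix a batch of 2 sequences, each 12 tokens wide with hidden size 8.
mixed = fourier_mix(np.random.rand(2, 12, 8).astype("float32"))
print(mixed.shape)  # (2, 12, 8)
```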
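The second hunk only touches the closing fence of the README's final usage example, which builds an `FNetClassifier` with `preprocessor=None` and then calls `classifier.fit`. For context, a minimal sketch of that style of call is shown below; the preset name `f_net_base_en`, the input key names, shapes, and toy labels are illustrative assumptions rather than content taken from the diff:

```python
import numpy as np
import keras_hub

# Illustrative sketch only: preset name, input keys, shapes, and class count are assumptions.
# With preprocessor=None, the classifier expects already-tokenized inputs.
features = {
    "token_ids": np.ones(shape=(2, 12), dtype="int32"),
    "segment_ids": np.zeros(shape=(2, 12), dtype="int32"),
}
labels = [0, 1]

classifier = keras_hub.models.FNetClassifier.from_preset(
    "f_net_base_en",  # assumed preset name
    num_classes=2,
    preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```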