Update README.md
README.md
@@ -25,6 +25,11 @@ Results of LongT5 (transient-global attention, large-sized model) fine-tuned on
 | MediaSum (4k input) | 35.54 | 19.04 | 32.20 |
 | CNN / DailyMail (4k input) | 42.49 | 20.51 | 40.18 |
 
+| Dataset | EM | F1 |
+| --- | --- | --- |
+| Natural Questions (4k input) | 60.77 | 65.38 |
+| Trivia QA (16k input) | 78.38 | 82.45 |
+
 ## Intended uses & limitations
 
 The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=longt5) to look for fine-tuned versions on a task that interests you.