---
license: apache-2.0
language:
  - en
---

# ScandEval Results on English NLU

We use ScandEval at revision 8766d2a to benchmark our pretrained FineWeb LMs on English NLU tasks.
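
The numbers below can be reproduced with ScandEval's Python API. The snippet is a minimal sketch only: the `Benchmarker` arguments shown here follow the current ScandEval documentation and may differ slightly at the pinned revision 8766d2a, so please check the docs of that revision before running it.

```python
# Sketch: reproduce one row of the table with ScandEval.
# Assumes ScandEval is installed from the pinned revision, e.g.
#   pip install "git+https://github.com/ScandEval/ScandEval@8766d2a"
# The constructor/call arguments below are an assumption and may need
# adjusting to match the API of that revision.
from scandeval import Benchmarker

# Restrict the benchmark to English datasets.
benchmarker = Benchmarker(language="en")

# Benchmark one of the pretrained FineWeb LMs on the English NLU tasks.
results = benchmarker.benchmark(model="model-garden-lms/bert-base-finewebs-1m")
print(results)
```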

Additionally, we benchmarked BERT, RoBERTa, and ELECTRA as comparison baselines.

| Model ID | Avg. Score | CoNLL-En | SST5 | ScaLA-En | SQuAD |
|:---------|:----------:|:--------:|:----:|:--------:|:-----:|
| model-garden-lms/bert-base-finewebs-1m | 69.03 | 88.98 ± 0.43 / 88.67 ± 0.36 | 58.11 ± 1.2 / 59.77 ± 1.49 | 57.29 ± 3.57 / 77.15 ± 2.17 | 55.82 ± 1.35 / 66.46 ± 1.51 |
| model-garden-lms/bert-base-finewebs-951k | 69.41 | 89.25 ± 0.4 / 88.9 ± 0.37 | 58.17 ± 1.26 / 59.86 ± 1.65 | 58.83 ± 3.46 / 78.22 ± 2.11 | 55.66 ± 1.19 / 66.36 ± 1.42 |
| model-garden-lms/bert-base-finewebs-901k | 69.12 | 89.22 ± 0.69 / 88.97 ± 0.45 | 57.93 ± 1.1 / 59.49 ± 1.44 | 58.66 ± 2.99 / 77.94 ± 1.88 | 55.0 ± 1.05 / 65.75 ± 1.29 |
| model-garden-lms/bert-base-finewebs-851k | 68.76 | 89.29 ± 0.52 / 89.0 ± 0.51 | 57.68 ± 0.97 / 59.01 ± 1.23 | 57.11 ± 3.77 / 77.36 ± 1.97 | 54.79 ± 1.21 / 65.87 ± 1.32 |
| model-garden-lms/bert-base-finewebs-801k | 68.12 | 88.92 ± 0.45 / 88.6 ± 0.44 | 57.64 ± 1.09 / 60.8 ± 1.88 | 54.28 ± 4.83 / 75.48 ± 2.97 | 54.13 ± 1.61 / 65.09 ± 1.65 |
| model-garden-lms/bert-base-token-dropping-finewebs-1m | 67.66 | 88.68 ± 0.76 / 88.47 ± 0.62 | 57.4 ± 1.7 / 59.61 ± 1.6 | 52.72 ± 5.13 / 73.6 ± 4.42 | 55.04 ± 1.54 / 65.72 ± 1.75 |
| model-garden-lms/bert-base-token-dropping-finewebs-951k | 66.87 | 88.81 ± 0.68 / 88.64 ± 0.54 | 57.44 ± 1.39 / 56.85 ± 2.09 | 50.91 ± 5.08 / 72.22 ± 4.2 | 54.63 ± 1.3 / 65.43 ± 1.43 |
| model-garden-lms/bert-base-token-dropping-finewebs-901k | 68.01 | 88.98 ± 0.64 / 88.67 ± 0.55 | 57.79 ± 1.31 / 58.91 ± 1.85 | 54.25 ± 6.3 / 75.73 ± 3.54 | 54.4 ± 0.72 / 65.31 ± 1.01 |
| model-garden-lms/bert-base-token-dropping-finewebs-851k | 67.97 | 88.9 ± 0.7 / 88.81 ± 0.54 | 58.0 ± 1.02 / 58.73 ± 1.8 | 54.04 ± 2.61 / 74.89 ± 2.07 | 54.75 ± 1.08 / 65.66 ± 1.26 |
| model-garden-lms/bert-base-token-dropping-finewebs-801k | 67.8 | 88.95 ± 0.7 / 88.73 ± 0.58 | 57.71 ± 1.43 / 60.5 ± 1.69 | 50.95 ± 6.3 / 74.16 ± 3.2 | 55.24 ± 1.37 / 66.13 ± 1.24 |
| model-garden-lms/teams-base-finewebs-1m | 72.64 | 89.27 ± 0.41 / 88.82 ± 0.41 | 59.58 ± 0.64 / 62.63 ± 3.0 | 66.72 ± 0.94 / 83.01 ± 0.45 | 59.95 ± 0.71 / 71.13 ± 0.58 |
| model-garden-lms/teams-base-finewebs-951k | 72.06 | 89.64 ± 0.52 / 89.18 ± 0.42 | 60.31 ± 1.03 / 58.82 ± 2.79 | 65.85 ± 2.01 / 82.47 ± 1.23 | 59.36 ± 0.77 / 70.82 ± 0.62 |
| model-garden-lms/teams-base-finewebs-901k | 72.19 | 89.31 ± 0.52 / 88.71 ± 0.53 | 59.86 ± 1.05 / 62.17 ± 2.61 | 64.89 ± 2.86 / 81.84 ± 1.65 | 59.74 ± 0.55 / 71.0 ± 0.5 |
| model-garden-lms/teams-base-finewebs-851k | 71.41 | 89.48 ± 0.47 / 88.99 ± 0.52 | 59.17 ± 1.2 / 60.25 ± 3.25 | 63.01 ± 2.31 / 80.77 ± 1.38 | 59.13 ± 0.53 / 70.5 ± 0.49 |
| model-garden-lms/teams-base-finewebs-801k | 70.73 | 89.2 ± 0.43 / 88.8 ± 0.46 | 59.21 ± 1.5 / 61.41 ± 2.36 | 58.47 ± 4.1 / 78.24 ± 2.4 | 59.59 ± 0.66 / 70.9 ± 0.59 |
| google-bert/bert-base-cased | 62.26 | 87.39 ± 0.79 / 87.11 ± 0.66 | 54.49 ± 1.36 / 53.22 ± 1.15 | 52.08 ± 2.13 / 74.52 ± 1.31 | 38.63 ± 2.1 / 50.68 ± 1.87 |
| google/electra-base-discriminator | 69.26 | 87.82 ± 0.69 / 86.83 ± 0.62 | 62.3 ± 1.12 / 55.93 ± 0.67 | 62.61 ± 1.21 / 80.85 ± 0.59 | 52.51 ± 0.86 / 65.2 ± 0.85 |
| FacebookAI/roberta-base | 68.96 | 90.35 ± 0.23 / 90.14 ± 0.2 | 60.95 ± 1.4 / 57.52 ± 1.97 | 50.64 ± 1.69 / 74.55 ± 0.9 | 57.82 ± 1.35 / 69.68 ± 1.02 |
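
The Avg. Score column appears to be the unweighted mean of the eight metric values in each row (two metrics for each of the four tasks). A quick check for the first row:

```python
# Sanity check of the Avg. Score interpretation above, using the eight
# metric values reported for model-garden-lms/bert-base-finewebs-1m.
scores = [88.98, 88.67, 58.11, 59.77, 57.29, 77.15, 55.82, 66.46]
avg = sum(scores) / len(scores)
print(round(avg, 2))  # 69.03, matching the Avg. Score column
```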