MayaGalvez committed d6e7829 (1 parent: e4dc31c): Create README.md
This dataset provides genealogical and typological information for the 104 languages used to pre-train the multilingual BERT language model (Devlin et al., 2019).
The genealogical information covers the language family and the genus of each language.
The typological description of the pre-training languages is based on 36 features from WALS (Dryer & Haspelmath, 2013).

The information provided here can be used, among other things, to investigate how the pre-training corpus is structured from a genealogical and typological perspective, and to what extent, if any, this structure is related to the performance of the language model.

In addition to the table of linguistic features, a PDF file has been uploaded listing all the grammars and language-descriptive materials used to compile the linguistic information.
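Once loaded, a table of this shape (one row per language, with family, genus, and WALS feature columns) can be explored with pandas. The snippet below is a minimal sketch: the column names and rows are illustrative assumptions, not the dataset's actual schema, and `81A` stands in for one of the 36 WALS features (feature 81A, "Order of Subject, Object and Verb").

```python
import pandas as pd

# Illustrative stand-in for the dataset's feature table; replace this
# DataFrame with the actual uploaded table (the real column names may differ).
df = pd.DataFrame(
    {
        "language": ["English", "Japanese", "Swahili"],
        "family": ["Indo-European", "Japonic", "Niger-Congo"],
        "genus": ["Germanic", "Japanese", "Bantoid"],
        # WALS feature 81A: Order of Subject, Object and Verb
        "81A": ["SVO", "SOV", "SVO"],
    }
)

# Count how many pre-training languages fall into each family.
family_counts = df.groupby("family")["language"].count()
print(family_counts.to_dict())

# Select the languages with a given dominant word order.
sov_languages = df.loc[df["81A"] == "SOV", "language"].tolist()
print(sov_languages)
```

Queries like these support the kind of analysis described above, e.g. relating the genealogical or typological makeup of the pre-training corpus to model performance.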