Salhan et al. (2024): Multilingual BabyLMs trained on CHILDES corpora.

CLIMB-MAO
Organization Card
Salhan et al. (2024). Less is More: Pre-Training Cross-Lingual Small-Scale Language Models with Cognitively-Plausible Curriculum Learning Strategies. Available from: https://arxiv.org/abs/2410.22886.
Salhan et al. (2024) create age-ordered corpora of Child-Directed Speech for four typologically distant language families and use them to implement SSLMs (small-scale language models) and acquisition-inspired curricula cross-lingually.
The MAO-CHILDES dataset contains orthographic datasets extracted from CHILDES for French, German, Japanese, Chinese, and several other lower-resource languages. It is part of a wider effort toward cognitively-inspired pretraining using resources from language acquisition research.
You can also find pretrained BabyLMs for French, German, Japanese, and Chinese, with three different cognitively-inspired curriculum learning strategies stored in the branches of each language-specific BabyLM repository.
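As a minimal sketch of how these branch-specific checkpoints might be retrieved, assuming the standard huggingface_hub and transformers APIs (the repository used below is one listed further down this page; no curriculum branch names are hard-coded, since they are repository-specific and should be inspected first):

```python
# Sketch: discover the branches of a language-specific BabyLM repository,
# then load a checkpoint from one of them. Each curriculum variant is
# described above as living on its own branch of the model repo.
from huggingface_hub import list_repo_refs
from transformers import AutoModel, AutoTokenizer

repo_id = "climb-mao/french-childes-curricula"

# List the available branches (refs) of the repository.
refs = list_repo_refs(repo_id)
branch_names = [branch.name for branch in refs.branches]
print(branch_names)

# Load the tokenizer and model from a chosen branch;
# `revision` accepts a branch name, tag, or commit hash.
branch = branch_names[0]
tokenizer = AutoTokenizer.from_pretrained(repo_id, revision=branch)
model = AutoModel.from_pretrained(repo_id, revision=branch)
```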
Collections: 1

Models: 22

climb-mao/french-childes-curricula
climb-mao/chinese-childes-curricula
climb-mao/spanish-childes-curricula
climb-mao/german-childes-curricula
climb-mao/portuguese-childes-curricula
climb-mao/english-childes-curricula
climb-mao/japanese-childes-curricula
climb-mao/dutch-childes-curricula
climb-mao/RON-CamBabyTokenizer
climb-mao/CAT-CamBabyTokenizer