- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
  Paper • 2203.05482 • Published • 6
- Diverse Weight Averaging for Out-of-Distribution Generalization
  Paper • 2205.09739 • Published • 1
- Fusing finetuned models for better pretraining
  Paper • 2204.03044 • Published • 5
- Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs
  Paper • 2309.07311 • Published • 2
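The collection above centers on weight averaging of fine-tuned models. As a rough illustration of the model-soup recipe named in the first paper (uniformly averaging the weights of several models fine-tuned from a shared initialization), here is a minimal PyTorch sketch; the checkpoint names are placeholders, not models from this profile.

```python
import torch
from transformers import AutoModelForSequenceClassification

# Hypothetical fine-tuned checkpoints sharing the same architecture and init.
checkpoint_names = [
    "finetune-seed-0",
    "finetune-seed-1",
    "finetune-seed-2",
]

models = [
    AutoModelForSequenceClassification.from_pretrained(name)
    for name in checkpoint_names
]

# Uniformly average corresponding parameters across all fine-tuned models.
soup_state = {}
for key, ref in models[0].state_dict().items():
    if ref.is_floating_point():
        soup_state[key] = torch.stack(
            [m.state_dict()[key] for m in models]
        ).mean(dim=0)
    else:
        # Integer buffers (e.g. position ids) are copied from the first model.
        soup_state[key] = ref

# Load the averaged weights into a single model; inference cost is unchanged.
soup = AutoModelForSequenceClassification.from_pretrained(checkpoint_names[0])
soup.load_state_dict(soup_state)
```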
Niels Horn (nilq)

AI & ML interests: natural language understanding, synthetic emotional speech, mechanistic interpretability.
Collections: 4 • Papers: 1

Models (16)
- nilq/baby-python-mistral-1L-tiny-TinyStories-ft • Text Generation • Updated • 27 • 1
- nilq/baby-python-mistral-1L-tiny-lua-ft • Text Generation • Updated • 24
- nilq/baby-python-1L-mistral-lua-stories-slerp • Text Generation • Updated • 31
- nilq/baby-python-mistral-1L-tiny-base • Text Generation • Updated • 33
- nilq/lua-stories-slerp-mistral-1L-tiny • Text Generation • Updated • 13
- nilq/lua-stories-slerp-mistral-2L-tiny • Text Generation • Updated • 15
- nilq/mistral-2L-tiny • Text Generation • Updated • 26
- nilq/lua-stories-linear-mistral-1L-tiny • Text Generation • Updated • 13
- nilq/python-mistral-1L-mini • Text Generation • Updated • 16
- nilq/mistral-1L-tiny • Text Generation • Updated • 549 • 5
Datasets (9)
- nilq/baby-python-and-tiny-stories-and-lua • Viewer • Updated • 12.3M • 37
- nilq/baby-python-and-lua • Viewer • Updated • 12.3M • 53 • 1
- nilq/baby-python-and-tiny-stories • Viewer • Updated • 13.9M • 47
- nilq/python-and-tiny-stories • Updated • 5
- nilq/baby-python • Viewer • Updated • 11.7M • 75 • 1
- nilq/small-lua-stack • Viewer • Updated • 559k • 54 • 2
- nilq/small-python-stack • Viewer • Updated • 2.59M • 63
- nilq/babylm-100M • Viewer • Updated • 12.7M • 55
- nilq/babylm-10M • Viewer • Updated • 3.14M • 84