arxiv:2502.19587

NeoBERT: A Next-Generation BERT

Published on Feb 26 · Submitted by tomaarsen on Feb 28

Abstract

Recent innovations in architecture, pre-training, and fine-tuning have led to the remarkable in-context learning and reasoning abilities of large auto-regressive language models such as LLaMA and DeepSeek. In contrast, encoders like BERT and RoBERTa have not seen the same level of progress despite being foundational for many downstream NLP applications. To bridge this gap, we introduce NeoBERT, a next-generation encoder that redefines the capabilities of bidirectional models by integrating state-of-the-art advancements in architecture, modern data, and optimized pre-training methodologies. NeoBERT is designed for seamless adoption: it serves as a plug-and-play replacement for existing base models, relies on an optimal depth-to-width ratio, and leverages an extended context length of 4,096 tokens. Despite its compact 250M parameter footprint, it achieves state-of-the-art results on the massive MTEB benchmark, outperforming BERT large, RoBERTa large, NomicBERT, and ModernBERT under identical fine-tuning conditions. In addition, we rigorously evaluate the impact of each modification on GLUE and design a uniform fine-tuning and evaluation framework for MTEB. We release all code, data, checkpoints, and training scripts to accelerate research and real-world adoption.

Community

Paper submitter

I'm excited about this new model and architecture, and I'm especially curious about what led to the substantially improved performance on MTEB compared to existing models and architectures.

At first glance, it seems the authors did not train for as long on the MTEB experiments as prior work did. For reference, the retrieval section of MTEB(Eng, v1) corresponds to BEIR, and the authors here report scores ranging from 21.0 to 31.6 after 2,000 training steps for base-size BERT, RoBERTa, NomicBERT, and ModernBERT, whereas the ModernBERT paper reports scores between 37.7 and 41.6 for those same models.

It's a little surprising that they trained for only 2,000 steps, considering they used 19 different datasets.

Unfortunately, experiments on NER are missing, but I will conduct them on CoNLL-2003 :)

Paper submitter

There is no TokenClassification head currently, and some other parts of the repository don't work out of the box yet. I'm still messing around with it. To get you started, I updated the config.json with this:

  "auto_map": {
    "AutoConfig": "model.NeoBERTConfig",
    "AutoModel": "model.NeoBERTLMHead",
    "AutoModelForSequenceClassification": "model.NeoBERTForSequenceClassification"
  },

and the tokenizer_config.json with this:

  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
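With those edits in place, the model should load through the Auto classes as long as remote code is enabled, since the auto_map resolves to the custom classes in the repository's model module. A minimal loading sketch, assuming the checkpoint lives at chandar-lab/NeoBERT (swap in whichever repo id you actually edited) and without guaranteeing the exact outputs of the custom classes:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "chandar-lab/NeoBERT"  # assumption: adjust to the checkpoint whose config you edited

# trust_remote_code=True is required because the auto_map points at the
# custom NeoBERT* classes defined in the repository's model module.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(repo_id, trust_remote_code=True)

# Thanks to model_input_names, the tokenizer only returns input_ids and
# attention_mask, matching what the model's forward pass expects.
batch = tokenizer("NeoBERT extends the context window to 4,096 tokens.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)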

Thanks for that PR @tomaarsen! I got NeoBERT (and specifically xformers) and token classification fine-tuning running on my 5090 and will report results soon :)
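For anyone else wanting to try token classification before an official head lands: a rough sketch of a wrapper, assuming the encoder can be loaded as a bare AutoModel that returns per-token hidden states (the auto_map above currently points AutoModel at the LM head, so you may need to map it to the plain encoder class instead). The class name and hyperparameters here are illustrative, not part of the official codebase:

import torch
import torch.nn as nn
from transformers import AutoModel

class NeoBERTTokenClassifier(nn.Module):
    # Hypothetical wrapper: NeoBERT encoder + dropout + per-token linear head.
    def __init__(self, repo_id: str, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask=None, labels=None):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        # Assumes the encoder output exposes last_hidden_state of shape
        # (batch, seq_len, hidden_size); adapt if NeoBERT returns a tuple instead.
        logits = self.classifier(self.dropout(outputs.last_hidden_state))
        loss = None
        if labels is not None:
            # -100 is the usual ignore index for padding and non-first sub-word tokens.
            loss = nn.CrossEntropyLoss(ignore_index=-100)(
                logits.view(-1, logits.size(-1)), labels.view(-1)
            )
        return {"loss": loss, "logits": logits}

Paired with the usual CoNLL-2003 preprocessing (word-level labels aligned to sub-word tokens, -100 for the rest), this drops into a standard PyTorch or Trainer fine-tuning loop.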

