This year, we started our "AI Agents and Agentic Workflows" series (https://www.turingpost.com/t/AI-Agents) to explore everything about AI agents step by step: all the vocabulary, how they work, and how to build them. The huge interest in this series and the large number of studies conducted on agents showed that it was one of the most popular and important themes of the year. In 2025, agents will most likely reach new highs, and we will be covering that for you. Now, let's review the agentic systems that have emerged this year.
Here is a list of 15 agentic systems and frameworks of 2024:
Introducing FineMath: the best public math pre-training dataset with 50B+ tokens! HuggingFaceTB/finemath
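If you want to poke at the data yourself, here is a minimal sketch for streaming it with the `datasets` library; it assumes a config named "finemath-4plus" and a "text" column, so check the dataset card on the Hub for the exact names before relying on them.

```python
from datasets import load_dataset

# Stream the dataset instead of downloading all 50B+ tokens locally.
# The "finemath-4plus" config name is an assumption taken from the dataset card.
ds = load_dataset("HuggingFaceTB/finemath", "finemath-4plus", split="train", streaming=True)

# Peek at a few pages to see what the extracted math text looks like.
for example in ds.take(3):
    print(example["text"][:200], "\n---")
```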
Math remains challenging for LLMs, and by training on FineMath we see considerable gains over other math datasets, especially on GSM8K and MATH.
We build the dataset by:
- carefully extracting math data from Common Crawl;
- iteratively filtering and recalling high-quality math pages, using a classifier trained on synthetic annotations to identify math reasoning and deduction.
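To make the filter-and-recall step concrete, here is a hedged sketch of what classifier-based filtering of crawled pages can look like. The model and dataset ids ("your-org/math-quality-classifier", "your-org/common-crawl-text"), the label name, and the 0.9 threshold are all hypothetical placeholders, not the FineMath team's actual pipeline.

```python
from datasets import load_dataset
from transformers import pipeline

# Hypothetical quality classifier; FineMath's real classifier and threshold may differ.
scorer = pipeline("text-classification", model="your-org/math-quality-classifier")

def keep_mathy(batch):
    # Score each page and keep only those rated as math-heavy enough.
    scores = scorer(batch["text"], truncation=True)
    return [s["label"] == "math" and s["score"] >= 0.9 for s in scores]

# Stream a (hypothetical) crawl text dump and filter it page by page.
pages = load_dataset("your-org/common-crawl-text", split="train", streaming=True)
math_pages = pages.filter(keep_mathy, batched=True, batch_size=64)
```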
We conducted a series of ablations comparing the performance of Llama-3.2-3B-Base after continued pre-training on FineMath and observed notable gains compared to the baseline model and other public math datasets.
We hope this helps advance the performance of LLMs on math and reasoning! We're also releasing all the ablation models as well as the evaluation code.
After 6 years, BERT, the workhorse of encoder models, finally gets a replacement: welcome ModernBERT!
We talk a lot about Generative AI, meaning the decoder version of the Transformer architecture, but this is only one way to build LLMs: encoder models, which turn a sentence into a vector, are perhaps even more widely used in industry than generative models.
The workhorse for this category has been BERT since its release in 2018 (that's prehistory for LLMs).
It's not a fancy 100B-parameter supermodel (just a few hundred million parameters), but it's an excellent workhorse, kind of a Honda Civic for LLMs.
Many applications use BERT-family models - the top models in this category accumulate millions of downloads on the Hub.
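To make the "sentence in, vector out" point concrete, here is a minimal mean-pooling sketch with plain BERT. It is only an illustration of how encoder embeddings are produced; production systems typically use models fine-tuned specifically for sentence embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["Encoders turn text into vectors.", "Those vectors power search and classification."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq_len, hidden)

# Mean-pool over real tokens only, ignoring padding.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                            # torch.Size([2, 768])
```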
Now a collaboration between Answer.AI and LightOn has just introduced BERT's replacement: ModernBERT.
TL;DR, architecture changes. First, standard modernizations:
- Rotary positional embeddings (RoPE)
- Replace GeLU with GeGLU
- Use Flash Attention 2
The team also introduced innovative techniques like alternating attention instead of full attention, and sequence packing to get rid of padding overhead.
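As a quick illustration of one of these changes, here is a generic PyTorch sketch of a feed-forward block with a GeGLU activation (GeLU applied to a gated projection). This shows the general technique, not ModernBERT's exact implementation; the fused projection and layer sizes are illustrative choices.

```python
import torch
import torch.nn.functional as F
from torch import nn

class GeGLUFeedForward(nn.Module):
    """Feed-forward block using GeGLU: GELU(x W) * (x V), then a down projection."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        # One fused projection produces both the activated half and the gating half.
        self.proj = nn.Linear(d_model, 2 * d_ff)
        self.out = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a, b = self.proj(x).chunk(2, dim=-1)   # split into the two halves
        return self.out(F.gelu(a) * b)         # gated GELU, then project back

# Toy usage with illustrative dimensions.
ffn = GeGLUFeedForward(d_model=768, d_ff=2304)
y = ffn(torch.randn(2, 16, 768))               # (batch, seq_len, d_model)
```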
As a result, the model tops the encoder-model leaderboard: it beats the previous standard, DeBERTaV3, with 1/5th of the memory footprint, and runs 4x faster!
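Trying it out is a one-liner with the `transformers` fill-mask pipeline. This sketch assumes the released checkpoint id is "answerdotai/ModernBERT-base" and that your installed `transformers` version already includes ModernBERT support; if not, upgrade the library first.

```python
from transformers import pipeline

# Checkpoint id assumed from the release announcement; verify it on the Hub.
fill = pipeline("fill-mask", model="answerdotai/ModernBERT-base")

# Masked-token prediction, the classic encoder pretraining task.
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```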