Data Agents


Recent Activity


m-ric posted an update 7 days ago
A new research paper from KAIST builds on smolagents to push the boundaries of distillation 🥳
➡️ "Distilling LLM Agent into Small Models with Retrieval and Code Tools" shows that, when trying to distill reasoning capability from a strong LLM ("teacher") into a smaller one ("student"), it's much better to use agent traces than CoT traces.

The advantages are:
1. Improved generalization
Intuitively, this is because the agent can encounter more "surprising" results by interacting with its environment: for example, a web search called by the teacher in agent mode can surface results that the teacher would never have generated in plain CoT.

2. Reduced hallucinations
The trace won't hallucinate tool call outputs, since those come from actually executing the tools!
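
As a rough illustration, collecting such teacher agent traces could look like the sketch below. This is a minimal sketch, not the paper's pipeline: `HfApiModel` and `agent.memory.steps` exist in recent smolagents releases, but the exact names vary across versions, so check yours.

```python
# Hedged sketch: generate teacher agent traces for distillation.
# Not the paper's code; trace-access attributes vary by smolagents version.
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

teacher = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # the environment that can "surprise" the teacher
    model=HfApiModel(),              # hosted teacher-size model
)

traces = []
for question in ["Who won the 2023 Nobel Prize in Physics?"]:
    answer = teacher.run(question)
    # Each step records the model's reasoning, the tool calls, and the REAL
    # tool outputs -- exactly what CoT-only distillation data lacks.
    traces.append({"question": question, "steps": teacher.memory.steps, "answer": answer})
```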

Thank you @akseljoonas for mentioning this paper!
loubnabnl posted an update 17 days ago
m-ric posted an update 20 days ago
๐—”๐—ฏ๐˜€๐—ผ๐—น๐˜‚๐˜๐—ฒ ๐—ญ๐—ฒ๐—ฟ๐—ผ: ๐—Ÿ๐—Ÿ๐— ๐˜€ ๐—ฐ๐—ฎ๐—ป ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป ๐˜„๐—ถ๐˜๐—ต๐—ผ๐˜‚๐˜ ๐—ฎ๐—ป๐˜† ๐—ฒ๐˜…๐˜๐—ฒ๐—ฟ๐—ป๐—ฎ๐—น ๐—ฑ๐—ฎ๐˜๐—ฎ ๐Ÿคฏ

Has the "data wall" just been breached?

Recent RL paradigms often relied on a set of questions and answers that needed to be manually curated. Researchers from Tsinghua University asked: "why, though?"

🤔 Indeed, why learn from questions designed by a human teacher, when the model can start from its base knowledge and learn by experimenting in a code environment, proposing coding tasks itself and trying to solve them?

Thus they created "Absolute Zero Reasoning" (AZR), an approach that removes any need for human-curated data.

🎭 Dual roles:
‣ Proposer: generates challenging but solvable coding tasks
‣ Solver: attempts to solve those self-proposed tasks

🧪 Three task types: all are defined as triplets of program, input, and output
‣ Deduction: give the model a program and an input; it must deduce the output
‣ Abduction: give the model a program and an output; it must find an input that produces that output
‣ Induction: give the model input/output pairs; it must synthesize the program
Btw, this reminded me of my long-forgotten philosophy classes: Aristotle was more on the induction side, learning from real-world examples, while Plato was more on the deduction side, trying to get quite far from a single premise and pure reasoning.
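
To make the triplet structure concrete, here's a tiny sketch of how the three task types can be derived from a single (program, input, output) triplet. This is my reading of the setup, not the authors' code; AZR executes programs in a proper sandbox, not a bare `exec()`.

```python
# One self-proposed (program, input, output) triplet, and the three AZR
# task types built from it. Toy illustration only.

def run_program(program_src: str, x):
    """Execute a proposed program `f` on input x (no sandboxing here!)."""
    env = {}
    exec(program_src, env)
    return env["f"](x)

program = "def f(x):\n    return sorted(set(x))"
x = [3, 1, 3, 2]
y = run_program(program, x)  # -> [1, 2, 3]

deduction = {"given": (program, x), "predict": "output", "target": y}
abduction = {"given": (program, y), "predict": "input", "target": x}
induction = {"given": [(x, y)], "predict": "program", "target": program}
```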

📊 Results:
‣ AZR post-training creates a nice improvement on known models like Qwen2.5-7B
‣ Shows strong cross-domain transfer: coding ↔️ math reasoning

🧠 Other findings:
‣ A better base performance (general or code-specific) amplifies the gains from Absolute Zero Reasoning
‣ The researchers warn about "uh-oh moments" (a wink at DeepSeek's "aha moments"), where the model generates concerning goals like "make an extremely convoluted code to outsmart all these humans": so supervision is still needed!

Paper here: Absolute Zero: Reinforced Self-play Reasoning with Zero Data (2505.03335)
m-ric posted an update 24 days ago
I've made an open version of Google's NotebookLM, and it shows the superiority of the open-source tech stack! 💪

The app's workflow is simple. Given a source PDF or URL, it extracts the content, then tasks Meta's Llama 3.3-70B with writing the podcast script, using a good prompt crafted by @gabrielchua ("two hosts, with lively discussion, fun notes, insightful questions, etc.").
Then it hands off the text-to-speech conversion to Kokoro-82M, and there you go: you have two hosts discussing any article.
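
A condensed sketch of that flow, assuming huggingface_hub's InferenceClient with a hosted Llama 3.3 endpoint; the prompt wording, the `document_text` placeholder, and the Kokoro hand-off comment are illustrative, not the app's actual code:

```python
# Hedged sketch of the two-step pipeline: LLM writes the script, TTS reads it.
from huggingface_hub import InferenceClient

document_text = "...text extracted from the source PDF or URL..."

client = InferenceClient()  # recent versions also accept provider="cerebras"
response = client.chat_completion(
    messages=[
        {"role": "system", "content": "Write a lively two-host podcast script "
                                      "with fun notes and insightful questions."},
        {"role": "user", "content": document_text},
    ],
    model="meta-llama/Llama-3.3-70B-Instruct",
    max_tokens=2000,
)
podcast_script = response.choices[0].message.content
# The script is then streamed, chunk by chunk, to Kokoro-82M for
# faster-than-real-time text-to-speech.
```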

The generation is nearly instant, because:
> Llama 3.3 70B runs at 1,000 tokens/second with Cerebras inference
> The audio is generated in streaming mode by the tiny (yet powerful) Kokoro, which generates voices faster than real time.

And the audio generation runs for free on ZeroGPU, hosted by HF on H200s.

Overall, open-source solutions rival the quality of closed-source solutions at close to no cost!

Try it here 👉👉 m-ric/open-notebooklm
m-ric posted an update about 2 months ago
New king of open VLMs: InternVL3 takes Qwen 2.5's crown! 👑

InternVL has been a wildly successful series of models, and the latest iteration has just taken the crown back thanks to its superior, natively multimodal vision training pipeline.

➡️ Most vision language models (VLMs) these days are built like Frankenstein's monster: take a good text-only Large Language Model (LLM) backbone and stitch a vision transformer (ViT) on top of it. Then the training is sequential 🔢: 1. freeze the LLM weights while you train the ViT alone to work with the LLM part, then 2. unfreeze all the weights so they learn to work together.

💫 The Shanghai AI Lab decided to challenge this paradigm with an approach they call "native". For each of their model sizes, they still start from a good LLM (mostly the Qwen-2.5 series; did I tell you I'm a huge fan of Qwen? ❤️) and stitch on the ViT, but they don't freeze anything: they train all the weights together on interleaved text and image understanding data in a single pre-training phase 🎨.
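
For intuition, here's what the two recipes look like in PyTorch pseudo-form; the modules are toy stand-ins, not InternVL's actual architecture:

```python
# Toy contrast between sequential freeze/unfreeze and "native" joint training.
import torch.nn as nn

class ToyVLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.vit = nn.Linear(768, 4096)   # stand-in for vision tower + projector
        self.llm = nn.Linear(4096, 4096)  # stand-in for the language backbone

model = ToyVLM()

# "Frankenstein" recipe, stage 1: freeze the LLM, train only the ViT part.
for p in model.llm.parameters():
    p.requires_grad = False
# ...train on alignment data, then stage 2: unfreeze everything...
for p in model.llm.parameters():
    p.requires_grad = True

# "Native" recipe: never freeze anything; a single pre-training phase where
# every parameter learns jointly from interleaved text + image data.
assert all(p.requires_grad for p in model.parameters())
```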

They claim this results in more seamless interactions between modalities. And the results prove them right: they took the crown of top VLMs, at nearly all sizes, from their Qwen-2.5 parents. 👑
thomwolf posted an update about 2 months ago
If you've followed the progress of robotics over the past 18 months, you've likely noticed that it is increasingly becoming the next frontier AI will unlock.

At Hugging Face, in robotics and across all AI fields, we believe in a future where AI and robots are open-source, transparent, and affordable; community-built and safe; hackable and fun. We've had so much mutual understanding and shared passion working with the Pollen Robotics team over the past year that we decided to join forces!

You can already find our open-source humanoid robot platform Reachy 2 on the Pollen website, and the Pollen team and community here on the Hub at pollen-robotics.

We're so excited to build and share more open-source robots with the world in the coming months!
m-ric posted an update 2 months ago
🚀 The DeepSeek R1 moment has come for GUI agents: rule-based reinforcement learning gives better results than SFT with 500x smaller datasets!

Traditionally (by which I mean "in the last few months"), GUI agents have been trained with supervised fine-tuning (SFT): collecting huge datasets of screen captures from people using computers, and fine-tuning your model on them. 📚

👉 But last week, a new paper introduced UI-R1, applying DeepSeek's R1-style rule-based reinforcement learning (RL) specifically to GUI action prediction tasks.
This is big news: with RL, maybe we can build good agents without the need for huge datasets.

UI-R1 uses a unified reward function that evaluates multiple responses from the model, optimizing via policy algorithms like Group Relative Policy Optimization (GRPO).

Specifically, the reward function assesses:
🎯 Action type accuracy: does the predicted action match the ground truth?
📍 Coordinate accuracy (specifically for clicks): is the predicted click within the correct bounding box?
📑 Output format: does the model clearly articulate both its reasoning and its final action?
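
In toy form, such a rule-based reward could look like the sketch below. It's my paraphrase of the three components above, not the paper's implementation; the component weights and the <think>/<action> tags are assumptions.

```python
# Toy rule-based reward in the spirit of UI-R1's three components.
import re

def reward(pred_action: str, pred_xy, gt_action: str, gt_box, raw_output: str) -> float:
    r = 0.0
    # 1. Action type accuracy: predicted action matches the ground truth.
    if pred_action == gt_action:
        r += 1.0
    # 2. Coordinate accuracy (clicks): predicted point inside the ground-truth box.
    if pred_action == "click" and pred_xy is not None:
        x, y = pred_xy
        x0, y0, x1, y1 = gt_box
        if x0 <= x <= x1 and y0 <= y <= y1:
            r += 1.0
    # 3. Format: the output must contain explicit reasoning and a final action.
    if re.search(r"<think>.*</think>.*<action>.*</action>", raw_output, re.S):
        r += 0.5
    return r

# Example: a correct click inside the box, with well-formed output.
print(reward("click", (50, 60), "click", (40, 40, 80, 80),
             "<think>the button is top-left</think><action>click(50,60)</action>"))
```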

Using just 136 carefully selected mobile tasks (compared to 76,000 tasks for larger models like OS-Atlas), UI-R1 shows significant efficiency and improved performance:
📈 Boosted action prediction accuracy from 76% to 89% on AndroidControl.
🌍 Outperformed larger, SFT-trained models (e.g., OS-Atlas-7B), demonstrating superior results with vastly fewer data points (136 tasks vs. 76K).
🔍 Enhanced adaptability and generalization, excelling even in out-of-domain scenarios.

The paper tests this RL-based method only on low-level GUI tasks. Could it generalize to more complex interactions? 🧐

Read the full paper here 👉 UI-R1: Enhancing Action Prediction of GUI Agents by Reinforcement Learning (2503.21620)
thomwolf posted an update 2 months ago
The new DeepSite space is really insane for vibe-coders
enzostvs/deepsite

With the wave of vibe-coding-optimized LLMs like the latest open-source DeepSeek model (version V3-0324), you can basically prompt it out of the box and create any app or game in one shot.

It feels so powerful to me: no more complex frameworks or under-the-hood prompt engineering to get a working text-to-app tool.

AI is eating the world and *open-source* AI is eating AI itself!

PS: and even more meta, the DeepSite app and the DeepSeek model are both fully open-source => time to start recursively improving?

PPS: you still need some inference hosting unless you're running the 600B-param model at home, so check out the very nice list of HF Inference Providers for this model: deepseek-ai/DeepSeek-V3-0324
freddyaboulton posted an update 2 months ago
Ever wanted to share your AI creations with friends? ✨

Screenshots are fine, but imagine letting others play with your ACTUAL model!

Introducing Gradio deep links 🔗 - now you can share interactive AI apps, not just images.

Add a gr.DeepLinkButton to any app and get shareable URLs that let ANYONE experiment with your models.
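
A minimal sketch of what that looks like; gr.DeepLinkButton ships in recent Gradio releases, and the demo app itself is illustrative:

```python
# Tiny Gradio app with a deep-link button for sharing the live app state.
import gradio as gr

def echo(message: str) -> str:
    return f"Model output for: {message}"

with gr.Blocks() as demo:
    inp = gr.Textbox(label="Prompt")
    out = gr.Textbox(label="Response")
    inp.submit(echo, inp, out)
    gr.DeepLinkButton()  # renders a button that yields a shareable URL

demo.launch()
```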

m-ric posted an update 3 months ago
smolagents now supports vLLM! 🥳

vLLM is one of the most popular local inference solutions, and the community had been asking us to integrate it: after a heavy refactoring of our LLM classes, we've just released smolagents 1.11.0, with a brand new VLLMModel class.
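
Usage could look roughly like this (a hedged sketch: it needs vllm installed and a GPU that fits the model; the model choice and the empty tool list are illustrative):

```python
# Minimal sketch of running a smolagents CodeAgent on top of vLLM.
# Assumes smolagents >= 1.11.0 and a local GPU.
from smolagents import CodeAgent, VLLMModel

model = VLLMModel(model_id="Qwen/Qwen2.5-7B-Instruct")  # any vLLM-supported model
agent = CodeAgent(tools=[], model=model)

agent.run("What is the 10th Fibonacci number?")
```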

Go try it and tell us what you think!

https://github.com/huggingface/smolagents/blob/45b2c86857b7f7657daaa74e4d17d347e9e2c4a4/src/smolagents/models.py#L497