WMT: Workshop on Statistical Machine Translation

non-profit

AI & ML interests: machine translation

Recent Activity


albertvillanova posted an update 6 days ago

albertvillanova posted an update 18 days ago
New in smolagents v1.16.0:
🔍 Bing support in WebSearchTool
🐍 Custom functions & executor_kwargs in LocalPythonExecutor
🔧 Streaming GradioUI fixes
🌐 Local web agents via api_base & api_key
📚 Better docs

👉 https://github.com/huggingface/smolagents/releases/tag/v1.16.0
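
For context, here's a minimal sketch of how these options might fit together: a Bing-backed WebSearchTool driving an agent whose model is served locally and reached through api_base/api_key. The model name, endpoint URL, and the engine argument are assumptions based on the release notes, not a verified example:

```python
from smolagents import CodeAgent, OpenAIServerModel, WebSearchTool

# Hypothetical OpenAI-compatible local server (e.g. vLLM or llama.cpp);
# api_base/api_key point the agent at it instead of a hosted API.
model = OpenAIServerModel(
    model_id="my-local-model",            # placeholder model name
    api_base="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # placeholder key
)

# Selecting Bing via an engine argument is assumed from the release notes.
agent = CodeAgent(tools=[WebSearchTool(engine="bing")], model=model)
agent.run("What's new in smolagents v1.16.0?")
```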
albertvillanova posted an update about 1 month ago
smolagents v1.14.0 is out! 🚀
🔌 MCPClient: A sleek new client for connecting to remote MCP servers, making integrations more flexible and scalable.
🪨 Amazon Bedrock: Native support for Bedrock-hosted models.
smolagents is now more powerful, flexible, and enterprise-ready. 💼

Full release 👉 https://github.com/huggingface/smolagents/releases/tag/v1.14.0
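
As a rough illustration, connecting an agent to a remote MCP server might look like the sketch below. The SSE endpoint is a placeholder, and the constructor and context-manager usage are assumed from the smolagents docs rather than verified against this exact release:

```python
from smolagents import CodeAgent, InferenceClientModel, MCPClient

# Placeholder URL for a remote MCP server's SSE endpoint.
server_parameters = {"url": "http://127.0.0.1:8000/sse"}

# The context manager exposes the server's tools and closes the connection
# when done. (The default model class was named HfApiModel in some releases.)
with MCPClient(server_parameters) as tools:
    agent = CodeAgent(tools=tools, model=InferenceClientModel())
    agent.run("Which tools do you have access to?")
```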
#smolagents #LLM #AgenticAI
albertvillanova posted an update 3 months ago
🚀 New smolagents update: Safer Local Python Execution! 🦾🐍

With the latest release, we've added security checks to the local Python interpreter: every evaluation is now analyzed for dangerous builtins, modules, and functions. 🔒

Here's why this matters & what you need to know! 🧵👇

1️⃣ Why is local execution risky? ⚠️
AI agents that run arbitrary Python code can unintentionally (or maliciously) access system files, run unsafe commands, or exfiltrate data.

2️⃣ New Safety Layer in smolagents 🛡️
We now inspect every return value during execution (see the sketch after the link below):
✅ Allowed: Safe built-in types (e.g., numbers, strings, lists)
⛔ Blocked: Dangerous functions/modules (e.g., os.system, subprocess, exec, shutil)

3️⃣ Immediate Benefits 💡
- Prevent agents from accessing unsafe builtins
- Block unauthorized file or network access
- Reduce accidental security vulnerabilities

4️⃣ Security Disclaimer ⚠️
🚨 Despite these improvements, local Python execution is NEVER 100% safe. 🚨
If you need true isolation, use a remote sandboxed executor like Docker or E2B.

5️⃣ The Best Practice: Use Sandboxed Execution 🔒
For production-grade AI agents, we strongly recommend running code in a Docker or E2B sandbox to ensure complete isolation.

6️⃣ Upgrade Now & Stay Safe! 🚀
Check out the latest smolagents release and start building safer AI agents today.

🔗 https://github.com/huggingface/smolagents
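
To make point 2️⃣ concrete, here is a deliberately simplified illustration of this kind of return-value inspection. It is not the actual smolagents implementation, just the general idea:

```python
import types

# Simplified illustration; NOT the real smolagents code.
DANGEROUS_MODULES = {"os", "subprocess", "shutil", "sys", "socket"}
DANGEROUS_CALLABLES = {"exec", "eval", "compile", "__import__", "system"}
SAFE_TYPES = (type(None), bool, int, float, str, bytes, list, tuple, dict, set)

def check_value(value):
    """Raise if an evaluated value exposes a dangerous module or callable."""
    if isinstance(value, types.ModuleType):
        if value.__name__.split(".")[0] in DANGEROUS_MODULES:
            raise ValueError(f"Forbidden module: {value.__name__}")
    elif callable(value):
        if getattr(value, "__name__", "") in DANGEROUS_CALLABLES:
            raise ValueError(f"Forbidden callable: {value.__name__}")
    elif not isinstance(value, SAFE_TYPES):
        raise ValueError(f"Suspicious value type: {type(value).__name__}")
    return value

check_value(sum([1, 2, 3]))               # passes: plain int
try:
    check_value(__import__("os").system)  # blocked: dangerous callable
except ValueError as err:
    print(err)
```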

What security measures do you take when running AI-generated code? Let's discuss! 👇

#AI #smolagents #Python #Security
albertvillanova posted an update 3 months ago
🚀 Big news for AI agents! With the latest release of smolagents, you can now securely execute Python code in sandboxed Docker or E2B environments. 🦾🔒

Here's why this is a game-changer for agent-based systems: 🧵👇

1️⃣ Security First 🔒
Running AI agents in unrestricted Python environments is risky! With sandboxing, your agents are isolated, preventing unintended file access, network abuse, or system modifications.

2️⃣ Deterministic & Reproducible Runs 📦
By running agents in containerized environments, you ensure that every execution happens in a controlled and predictable setting, with no more environment mismatches or dependency issues!

3️⃣ Resource Control & Limits 🚦
Docker and E2B allow you to enforce CPU, memory, and execution time limits, so rogue or inefficient agents don’t spiral out of control.

4️⃣ Safer Code Execution in Production 🏭
Deploy AI agents confidently, knowing that any generated code runs in an ephemeral, isolated environment, protecting your host machine and infrastructure.

5️⃣ Easy to Integrate 🛠️
With smolagents, you can simply configure your agent to use Docker or E2B as its execution backend (see the sketch below), with no need for complex security setups!

6️⃣ Perfect for Autonomous AI Agents 🤖
If your AI agents generate and execute code dynamically, this is a must-have to avoid security pitfalls while enabling advanced automation.

⚡ Get started now: https://github.com/huggingface/smolagents
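
For illustration, switching to a sandboxed backend can be as small as the sketch below. The executor_type keyword is taken from recent smolagents docs and may differ in the exact release this post describes, and the default model class has been renamed across versions:

```python
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),  # assumed default HF Inference model class
    executor_type="docker",        # or "e2b": generated code runs in the
                                   # sandbox, never on the host interpreter
)
agent.run("Compute the 10,000th prime number.")
```

Docker needs a local daemon running, and E2B expects an API key in the environment; either way, the host filesystem stays out of reach of generated code.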

What will you build with smolagents? Let us know! 🚀💡
albertvillanova posted an update 4 months ago
🚀 Introducing @huggingface Open Deep-Research 💥

In just 24 hours, we built an open-source agent that:
✅ Autonomously browses the web
✅ Searches, scrolls & extracts info
✅ Downloads & manipulates files
✅ Runs calculations on data

It scores 55% on the GAIA validation set! Help us improve it! 💡
https://huggingface.co/blog/open-deep-research
albertvillanova posted an update 5 months ago

lhoestq posted an update 6 months ago
Made an HF Dataset editor à la Google Sheets here: lhoestq/dataset-spreadsheets

With Dataset Spreadsheets:
✏️ Edit datasets in the UI
🔗 Share link with collaborators
🐍 Use locally in DuckDB or Python

Available for the 100,000+ parquet datasets on HF :)
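
For example, a dataset edited in the UI can then be queried locally along these lines. The repo path is hypothetical, and hf:// URLs assume a recent huggingface_hub (for pandas) and a recent DuckDB:

```python
import duckdb
import pandas as pd

# Hypothetical Parquet file inside a Hub dataset repo.
path = "hf://datasets/username/my-dataset/data/train-00000-of-00001.parquet"

df = pd.read_parquet(path)                           # load into a DataFrame
duckdb.sql(f"SELECT COUNT(*) FROM '{path}'").show()  # or query it in place
```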
albertvillanova posted an update 7 months ago
🚨 How green is your model? 🌱 Introducing a new feature in the Comparator tool: Environmental Impact for responsible #LLM research!
👉 open-llm-leaderboard/comparator
Now, you can not only compare models by performance, but also by their environmental footprint!

🌍 The Comparator calculates CO₂ emissions during evaluation and shows key model characteristics: evaluation score, number of parameters, architecture, precision, type... 🛠️
Make informed decisions about your model's impact on the planet and join the movement towards greener AI!
albertvillanova posted an update 7 months ago
🚀 New feature in the 🤗 Open LLM Leaderboard Comparator: compare models with their base versions & derivatives (finetunes, adapters, etc.). Perfect for tracking how adjustments affect performance & seeing innovations in action. Dive deeper into the leaderboard!

🛠️ Here's how to use it:
1. Select your model from the leaderboard.
2. Load its model tree.
3. Choose any base & derived models (adapters, finetunes, merges, quantizations) for comparison.
4. Press Load.
See side-by-side performance metrics instantly!

Ready to dive in? 🏆 Try the 🤗 Open LLM Leaderboard Comparator and see how models stack up against their base versions and derivatives: open-llm-leaderboard/comparator 🌐
albertvillanova posted an update 7 months ago
🚀 Exciting update! You can now compare multiple models side-by-side with the Hugging Face Open LLM Comparator! 📊

open-llm-leaderboard/comparator

Dive into multi-model evaluations, pinpoint the best model for your needs, and explore insights across top open LLMs all in one place. Ready to level up your model comparison game?
albertvillanova posted an update 7 months ago
🚨 Instruct-tuning impacts models differently across families! Qwen2.5-72B-Instruct excels on IFEval but struggles with MATH-Hard, while Llama-3.1-70B-Instruct avoids MATH performance loss! Why? Can they follow the format in examples? 📊 Compare models: open-llm-leaderboard/comparator
albertvillanova posted an update 7 months ago
Finding the Best SmolLM for Your Project

Need an LLM assistant but unsure which #smolLM to run locally? With so many models available, how can you decide which one suits your needs best? 🤔

If the model you’re interested in is evaluated on the Hugging Face Open LLM Leaderboard, there’s an easy way to compare them: use the model Comparator tool: open-llm-leaderboard/comparator
Let's walk through an example 👇

Let’s compare two solid options:
- Qwen2.5-1.5B-Instruct from Alibaba Cloud Qwen (1.5B params)
- gemma-2-2b-it from Google (2.5B params)

For an assistant, you want a model that’s great at instruction following. So, how do these two models stack up on the IFEval task?

What about other evaluations?
Both models are close in performance on many other tasks, showing minimal differences. Surprisingly, the 1.5B Qwen model performs just as well as the 2.5B Gemma in many areas, even though it's smaller in size! 📊

This is a great example of how parameter size isn’t everything. With efficient design and training, a smaller model like Qwen2.5-1.5B can match or even surpass larger models in certain tasks.

Looking for other comparisons? Drop your model suggestions below! 👇