
Ame Vi

Ameeeee

AI & ML interests

None yet


Organizations

Hugging Face, Argilla, Women on Hugging Face, Data Is Better Together, Social Post Explorers, HuggingFaceFW-Dev, Data Is Better Together Contributor, Bluesky Community, Hugging Face DG

Ameeeee's activity

reacted to albertvillanova's post with 🤗 18 days ago
New in smolagents v1.16.0:
🔍 Bing support in WebSearchTool
🐍 Custom functions & executor_kwargs in LocalPythonExecutor
🔧 Streaming GradioUI fixes
🌐 Local web agents via api_base & api_key
📚 Better docs

👉 https://github.com/huggingface/smolagents/releases/tag/v1.16.0
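
As a rough illustration of the search update, here is a minimal sketch (my own, not taken from the release notes); the `engine` argument name is an assumption based on the "Bing support" bullet, so check the v1.16.0 docs for the exact parameter:

```python
# Minimal sketch, not from the release notes: trying the updated WebSearchTool.
# The `engine` keyword is an assumption; verify it in the v1.16.0 documentation.
from smolagents import WebSearchTool

search = WebSearchTool(engine="bing")  # assumed keyword; DuckDuckGo is the usual default
results = search("smolagents v1.16.0 release notes")  # tools are callable like functions
print(results)
```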
reacted to burtenshaw's post with 🚀 18 days ago
We're thrilled to announce the launch of our comprehensive Model Context Protocol (MCP) Course! This free program is designed to take learners from foundational understanding to practical application of MCP in AI.

Follow the course on the hub: mcp-course

In this course, you will:
📖 Study Model Context Protocol in theory, design, and practice.
🧑‍💻 Learn to use established MCP SDKs and frameworks.
💾 Share your projects and explore applications created by the community.
🏆 Participate in challenges and evaluate your MCP implementations.
🎓 Earn a certificate of completion.

At the end of this course, you'll understand how MCP works and how to build your own AI applications that leverage external data and tools using the latest MCP standards.
reacted to fdaudens's post with 👀 20 days ago
Want a fun project to learn how AI agents work? I built one that queries the FOIA API, and you can too!

It's a quick proof of concept I did for a workshop at the Hacks/Hackers Summit in Baltimore, demonstrating what agents can do, how to design workflows, and approaches to coding them.

- Slides https://docs.google.com/presentation/d/1lbf5K0yi213N7uxGnVKJdGWq2i0GayWj4vIcLkVlwD8/edit?usp=sharing
- Colab notebook https://colab.research.google.com/drive/1iw0qZyTni_6BcK0jj1x6gTfjm85NlaGv
- Gradio app: https://huggingface.co/spaces/JournalistsonHF/foia-agent
- MCP version to plug into Claude, Cursor, etc: https://huggingface.co/spaces/JournalistsonHF/foia-mcp-tools

Feel free to use the Gradio app for real FOIA requests, but also to improve it (I'm far from being a good coder) or adapt it for other countries.

And shout-out to everyone who powered through the workshop! 😅
reacted to m-ric's post with 👍👀 20 days ago
๐—”๐—ฏ๐˜€๐—ผ๐—น๐˜‚๐˜๐—ฒ ๐—ญ๐—ฒ๐—ฟ๐—ผ: ๐—Ÿ๐—Ÿ๐— ๐˜€ ๐—ฐ๐—ฎ๐—ป ๐˜๐—ฟ๐—ฎ๐—ถ๐—ป ๐˜„๐—ถ๐˜๐—ต๐—ผ๐˜‚๐˜ ๐—ฎ๐—ป๐˜† ๐—ฒ๐˜…๐˜๐—ฒ๐—ฟ๐—ป๐—ฎ๐—น ๐—ฑ๐—ฎ๐˜๐—ฎ ๐Ÿคฏ

Has the "data wall" just been breached?

Recent RL paradigms often relied on a set of questions and answers that needs to be manually curated. Researchers from Tsinghua University asked: why, though?

🤔 Indeed, why learn from questions designed by a human teacher, when the model can start from its base knowledge and learn by experimenting in a code environment, proposing coding tasks itself and trying to solve them?

Thus they created "Absolute Zero Reasoner" (AZR), an approach that removes any need for human-curated data.

🎭 Dual roles:
‣ Proposer: generates challenging but solvable coding tasks
‣ Solver: attempts to solve those self-proposed tasks

🧪 Three task types: all types are defined as triplets of program, input, and output (see the sketch after this list)
‣ Deduction: give the model an input and a program; it must deduce the output
‣ Abduction: give the model a program and an output; it must find an input that produces that output
‣ Induction: synthesize a program from input/output pairs
Btw, this reminded me of my long-forgotten philosophy classes: Aristotle was more on the induction side, learning from real-world analogies, while Plato was more on the deduction side, trying to progress quite far with just one input and his reasoning.
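
A toy illustration of those triplets (my own sketch, not the AZR training code): one (program, input, output) triple and the three prediction targets derived from it.

```python
# Toy sketch, not the AZR code: one (program, input, output) triplet
# and the three task types derived from it.
def program(x):
    return x * 2 + 1

triplet = {"program": program, "input": 5, "output": program(5)}  # output = 11

# Deduction: given program + input, predict the output.
deduced_output = triplet["program"](triplet["input"])                # 11

# Abduction: given program + output, find an input producing that output.
abduced_input = next(i for i in range(100)
                     if triplet["program"](i) == triplet["output"])  # 5

# Induction: given input/output pairs, synthesize the program itself.
io_pairs = [(i, triplet["program"](i)) for i in (1, 2, 3)]           # [(1, 3), (2, 5), (3, 7)]
```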

📊 Results:
‣ AZR post-training yields a nice improvement on known models like Qwen2.5-7B
‣ Shows strong cross-domain transfer: coding ↔️ math reasoning

🧠 Other findings:
‣ Better base performance (general or code-specific) amplifies the gains from AZR
‣ The researchers warn about "uh-oh moments" (a wink at DeepSeek's "aha moments") where the model generates concerning goals like "make an extremely convoluted code to outsmart all these humans": so supervision is still needed!

Paper here: Absolute Zero: Reinforced Self-play Reasoning with Zero Data (2505.03335)
reacted to clem's post with 🤗🚀🔥 20 days ago
Very cool to see pytorch contributing on Hugging Face. Time to follow them to see what they're cooking!
reacted to jeffboudier's post with 🚀 20 days ago
Transcribing 1 hour of audio for less than $0.01 🤯

@mfuntowicz cooked with 8x faster Whisper speech recognition - whisper-large-v3-turbo transcribes at 100x real time on a $0.80/hr L4 GPU!

How they did it: https://huggingface.co/blog/fast-whisper-endpoints

1-click deploy with HF Inference Endpoints: https://endpoints.huggingface.co/new?repository=openai%2Fwhisper-large-v3-turbo&vendor=aws&region=us-east&accelerator=gpu&instance_id=aws-us-east-1-nvidia-l4-x1&task=automatic-speech-recognition&no_suggested_compute=true
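
For reference, a minimal sketch of calling such an endpoint once it is deployed (my own example; the endpoint URL and audio file below are placeholders):

```python
# Minimal sketch: transcribing audio against a deployed whisper-large-v3-turbo
# Inference Endpoint. The URL is a placeholder for the one you get after the
# 1-click deploy; the audio file is a placeholder too.
from huggingface_hub import InferenceClient

client = InferenceClient(model="https://<your-endpoint>.endpoints.huggingface.cloud")
result = client.automatic_speech_recognition("meeting.wav")  # local path or URL
print(result.text)
```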
reacted to AdinaY's post with 🔥 20 days ago
reacted to dhruv3006's post with 😎🔥🚀 20 days ago
Lumier – Run macOS & Linux VMs in a Docker container

Lumier is an open-source tool for running macOS virtual machines in Docker containers on Apple Silicon Macs.

When building virtualized environments for AI agents, we needed a reliable way to package and distribute macOS VMs. Inspired by projects like dockur/macos, which made running macOS in Docker possible, we wanted to create something similar but optimized for Apple Silicon.

The existing solutions either didn't support M-series chips or relied on KVM/Intel emulation, which was slow and cumbersome. We realized we could leverage Apple's Virtualization Framework to create a much better experience.

Lumier takes a different approach: It uses Docker as a delivery mechanism (not for isolation) and connects to a lightweight virtualization service (lume) running on your Mac.

Lumier is 100% open-source under MIT license and part of C/ua.

Github : https://github.com/trycua/cua/tree/main/libs/lumier
Join the discussion here : https://discord.gg/fqrYJvNr4a

reacted to fdaudens's post with ❤️ 20 days ago
Tried something new: an AI-generated podcast that breaks down the top research paper each day. Fully automated, now live on Spotify.

I built this prototype to help keep up with the rapid pace of AI developments and, hopefully, make cutting-edge research more accessible. I don't know about you, but just listening to a conversation about a paper really helps the content sink in for me.

This build taught me a lot about full automation. If you're into the technical weeds: Qwen3 runs on Inference to handle the script, Kokoro does the voice, and the whole thing gets published automatically thanks to the Hugging Face Jobs API and Gradio deployment.
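
As a hedged sketch of just the script-generation step (my reading of the pipeline, not the actual fdaudens/podcast-jobs code; the model id, prompt, and input file are assumptions for illustration):

```python
# Rough sketch of the script step only (not the actual podcast-jobs code):
# ask a hosted Qwen3 chat model to draft a two-host dialogue about a paper.
from huggingface_hub import InferenceClient

client = InferenceClient()  # reads HF_TOKEN from the environment

with open("abstract.txt") as f:  # placeholder input file
    abstract = f.read()

response = client.chat_completion(
    model="Qwen/Qwen3-32B",  # assumed; any hosted Qwen3 chat model would do
    messages=[{
        "role": "user",
        "content": "Write a short two-host podcast dialogue explaining this "
                   "paper in plain language:\n\n" + abstract,
    }],
    max_tokens=1500,
)
script = response.choices[0].message.content
print(script)
```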

It's not perfect yet; I'll be monitoring for hallucinations and incoherence. The voice model still needs polish, but it's a promising start. Would love to build this with the community: submit a PR or send feedback. It's just a beta of an experimental idea!

Big kudos to @m-ric, whose Open NotebookLM this is based on, and to @nielsr for his terrific work making research papers more accessible.

- Podcast on Spotify: https://open.spotify.com/show/3PTucIW1w1GIkqTYm32ka7?si=c7a851f83e6d4331 (Apple Podcasts soon)
- Code: fdaudens/podcast-jobs
- Open NotebookLM: m-ric/open-notebooklm
- Also super helpful, @qgallouedec's tutorial on the HF Jobs API: qgallouedec/run-hello-world
reacted to tomaarsen's post with 🔥 3 months ago
An assembly of 18 European companies, labs, and universities has banded together to launch 🇪🇺 EuroBERT! It's a state-of-the-art multilingual encoder for 15 European languages, designed to be finetuned for retrieval, classification, etc.

🇪🇺 15 Languages: English, French, German, Spanish, Chinese, Italian, Russian, Polish, Portuguese, Japanese, Vietnamese, Dutch, Arabic, Turkish, Hindi
3️⃣ 3 model sizes: 210M, 610M, and 2.1B parameters - very, very useful sizes in my opinion
➡️ Sequence length of 8192 tokens! Nice to see these higher sequence lengths for encoders becoming more common.
⚙️ Architecture based on Llama, but with bi-directional (non-causal) attention to turn it into an encoder. Flash Attention 2 is supported.
🔥 A new Pareto frontier (stronger *and* smaller) for multilingual encoder models
📊 Evaluated against mDeBERTa, mGTE, and XLM-RoBERTa for retrieval, classification, and regression (after finetuning for each task separately): EuroBERT punches way above its weight.
📝 Detailed paper with all the details, incl. data: FineWeb for English and CulturaX for multilingual data, The Stack v2 and Proof-Pile-2 for code.

Check out the release blogpost here: https://huggingface.co/blog/EuroBERT/release
* EuroBERT/EuroBERT-210m
* EuroBERT/EuroBERT-610m
* EuroBERT/EuroBERT-2.1B

The next step is for researchers to build upon the 3 EuroBERT base models and publish strong retrieval, zero-shot classification, etc. models for all to use. I'm very much looking forward to it!
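
A minimal sketch of loading one of the base models with transformers (my own example, not from the blog post; `trust_remote_code=True` and the pooling choice are assumptions to verify against the model card):

```python
# Minimal sketch, not from the EuroBERT release post: encode a sentence with
# the 210M base model. trust_remote_code=True and first-token pooling are
# assumptions; check the model card for the recommended setup.
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "EuroBERT/EuroBERT-210m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("EuroBERT is a multilingual encoder.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
sentence_embedding = hidden[:, 0]                # first-token pooling, one option among many
```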
reacted to fdaudens's post with 🔥 3 months ago
Is this the best tool to extract clean info from PDFs, handwriting and complex documents yet?

Open source olmOCR just dropped and the results are impressive.

Tested the free demo with various documents, including a handwritten Claes Oldenburg letter. The speed is impressive: 3,000 tokens per second on your own GPU, which works out to about $190 per million pages, roughly 1/32 the cost of GPT-4o. Game-changer for content extraction and digital archives.

To achieve this, Ai2 trained a 7B vision language model on 260K pages from 100K PDFs using "document anchoring" - combining PDF metadata with page images.

Best part: it actually understands document structure (columns, tables, equations) instead of just jumbling everything together like most OCR tools. Their human eval results back this up.

👉 Try the demo: https://olmocr.allenai.org

Going right into the AI toolkit: JournalistsonHF/ai-toolkit
reacted to burtenshaw's post with 👍 3 months ago
I made a real-time voice agent with FastRTC, smolagents, and Hugging Face Inference Providers. Check it out in this Space:

🔗 burtenshaw/coworking_agent
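
For a sense of the FastRTC side, here is a bare-bones sketch (my own, not the coworking_agent code): a trivial echo handler standing in for the smolagents-powered logic in the actual Space; treat the exact API details as assumptions to check against the FastRTC docs.

```python
# Bare-bones sketch, not the coworking_agent code: a FastRTC audio stream with
# a trivial echo handler where the real Space plugs in smolagents + an LLM.
from fastrtc import ReplyOnPause, Stream

def respond(audio):
    # audio is a (sample_rate, numpy_array) tuple; a real agent would run
    # STT -> smolagents -> TTS here. We simply echo the audio back.
    yield audio

stream = Stream(ReplyOnPause(respond), modality="audio", mode="send-receive")
stream.ui.launch()  # serves a Gradio UI for the voice chat
```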
reacted to davidberenstein1957's post with 😎➕🤗 6 months ago
Let's make a generation of amazing image-generation models

The best image generation models are trained on human preference datasets, where annotators have selected the best image from a choice of two. Unfortunately, many of these datasets are closed source, so the community cannot train open models on them. Let's change that!

The community can contribute image preferences to an open-source dataset that could be used for building AI models that convert text to image, like the Flux or Stable Diffusion families. The dataset will be open source, so everyone can use it to train models that we can all use.

Blog: https://huggingface.co/blog/burtenshaw/image-preferences