
Hey @mradermacher just wanted to let you know that we've begun onboarding you to Xet!
All new repos that you create will be Xet-enabled by default. We are still migrating existing repos, so you will see times when there are a mixture of LFS and Xet files side-by-side, but as the migration progresses everything will become Xet.
As I mentioned in my last message, none of this is an issue due to how we've designed the system for backward compatibility, but if you have any questions or concerns, please let me know. Otherwise, I'll follow up here once all your repos are migrated!

Inspired by Tiny Agents in JS from @julien-c, we ported the idea to Python and integrated it directly into huggingface_hub — with a built-in MCP Client and a Tiny Agents CLI.

TL;DR: With MCP (Model Context Protocol), you can expose tools like web search or image generation and connect them directly to LLMs. It’s simple — and surprisingly powerful.
pip install "huggingface_hub[mcp]>=0.32.0"
We wrote a blog post where we show how to run Tiny Agents, and dive deeper into how they work and how to build your own.
👉 https://huggingface.co/blog/python-tiny-agents
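To make the idea concrete, here is a rough sketch of what a Tiny Agents-style agent definition can look like. The field names and values below are illustrative, based on the examples in the blog post — check the huggingface_hub docs for the exact schema your installed version expects:

```python
import json

# A minimal Tiny Agents-style agent definition (e.g. an agent.json file).
# Field names are illustrative assumptions, not a guaranteed schema.
agent_config = {
    "model": "Qwen/Qwen2.5-72B-Instruct",  # any chat model available on the Hub
    "provider": "nebius",                  # inference provider (assumption)
    "servers": [
        {
            # An MCP server that exposes tools to the agent,
            # launched as a subprocess over stdio.
            "type": "stdio",
            "command": "npx",
            "args": ["@playwright/mcp@latest"],
        }
    ],
}

print(json.dumps(agent_config, indent=2))
```

The point of MCP is exactly this separation: the config only names which tool servers to start, and the client wires their tools into the LLM's tool-calling loop at runtime.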

Tiny Agents in Python: a MCP-powered agent in ~70 lines of code
Static Spaces can now have a build step
Xet is now the default storage option for new users and organizations

Woohoo!! Thanks for joining ❤️ I'll onboard you from the waitlist soon and follow up here when done.
Will do on the storage side - I'm also quite curious.
If you have any questions or feedback, don't hesitate to ping me here 🤗
Got my first Xet error, and it leaks a token (I think?)



We've been onboarding folks (https://huggingface.co/blog/xet-on-the-hub), we know the backend can scale (Llama 4 and Qwen 3 are on Xet), it's great for working with quants (see xet-team/quantization-dedup), and we're pushing on inviting impactful orgs and users on the Hub. You fit the bill.
We'd love to onboard you, get some feedback, and create some excitement 🎉
The steps are pretty straightforward - join the waitlist at hf.co/join/xet and we'll take care of the rest.
The system is fully backward compatible, so you shouldn't notice a thing. BUT to get the best experience when uploading/downloading, make sure you have hf_xet installed alongside the latest huggingface_hub.
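A quick way to check whether both packages are present is a small helper like this (a sketch using only the standard library; once hf_xet is installed, huggingface_hub picks it up automatically):

```python
from importlib.metadata import version, PackageNotFoundError

def missing_packages(names):
    """Return the subset of `names` not installed in this environment."""
    missing = []
    for name in names:
        try:
            version(name)
        except PackageNotFoundError:
            missing.append(name)
    return missing

# hf_xet is used automatically by huggingface_hub when present,
# so having both installed is all that's needed for Xet transfers.
todo = missing_packages(["huggingface_hub", "hf_xet"])
if todo:
    print("Run: pip install -U " + " ".join(todo))
else:
    print("Xet-ready: huggingface_hub and hf_xet are installed")
```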
What do you think?
Woohoo! Xet team member here. Thanks for signing up @mradermacher 🤗
The migration process should be very seamless. Because of the way Xet supports backward compatibility (you can read about it here if you're interested: https://huggingface.co/docs/hub/storage-backends#backward-compatibility-with-lfs), everyone will continue to be able to access the repos before, during, and after the migration.
I'll onboard you from the waitlist this week and then follow up once everything is moved over! If you have any questions, don't hesitate to follow up here and @ me, happy to keep supporting all the work you're doing :)

As you know, we're in the process of upgrading our storage backend to Xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. Now that we're certain the backend can scale even with big models like Llama 4 and Qwen 3, we're moving to the next phase: inviting impactful orgs and users on the Hub. As a big part of the open source ML community, we would love to onboard you next and create some excitement about it in the community too!
In terms of actual steps, it should be as simple as one of the org admins joining at hf.co/join/xet - we'll take care of the rest.
p.s. you'd need the latest hf_xet package alongside the latest huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage
p.p.s. this is fully backwards compatible so everything will work as it should! 🤗