Aurélien-Morgan CLAUDON

Aurelien-Morgan

AI & ML interests

None yet

Recent Activity

Organizations

Giskard, Gradio-Blocks-Party, Keras Dreambooth Event, Blog-explorers, huggingPartyParis, ZeroGPU Explorers, C4AI Community, Chinese LLMs on Hugging Face, Paris AI Running Club, cvmistralparis, Hugging Face Discord Community, Hugging Face Party @ PyTorch Conference, Nerdy Face, retrain-pipelines

Aurelien-Morgan's activity

reacted to freddyaboulton's post with 🤗🔥 about 14 hours ago
Getting WebRTC and WebSockets right in Python is very tricky. If you've tried to wrap an LLM in a real-time audio layer, then you know what I'm talking about.

That's where FastRTC comes in! It makes WebRTC and WebSocket streams super easy with minimal code and overhead.

Check out our org: hf.co/fastrtc
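
For a sense of how little code that means in practice, here is a minimal sketch of a real-time audio echo stream. The Stream and ReplyOnPause names follow FastRTC's documented quickstart at the time of this post, so treat the exact API as an assumption to verify against hf.co/fastrtc.

```python
# Minimal sketch, assuming the fastrtc package exposes Stream and ReplyOnPause
# as in its quickstart; verify names against hf.co/fastrtc before relying on them.
import numpy as np
from fastrtc import ReplyOnPause, Stream


def echo(audio: tuple[int, np.ndarray]):
    # Placeholder handler: stream the caller's audio straight back.
    # A real app would run STT -> LLM -> TTS here and yield synthesized audio.
    sample_rate, frames = audio
    yield sample_rate, frames


stream = Stream(
    handler=ReplyOnPause(echo),  # fires the handler when the speaker pauses
    modality="audio",
    mode="send-receive",
)

if __name__ == "__main__":
    stream.ui.launch()  # serves a WebRTC-backed Gradio UI locally
```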
upvoted an article 1 day ago

FastRTC: The Real-Time Communication Library for Python

updated a Space 1 day ago
reacted to lysandre's post with ❤️ 3 days ago
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases!

They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.

This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).

Starting with SmolVLM-2 & SigLIP-2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.

Going forward, continue expecting software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements and bug fixes. Accompanying these software releases, we'll release tags offering brand new models as fast as possible, to make them accessible to all immediately.
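
As a rough illustration of installing from one of these model-specific tags and then loading the model: the checkpoint id below is an assumption for illustration only, so check the actual SmolVLM-2 repository on the Hub.

```python
# Install transformers from the model-specific release tag (run in a shell):
#   pip install "git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2"
#
# The new architecture can then be loaded as usual. The checkpoint id below is
# assumed for illustration; substitute the real SmolVLM-2 repository id.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "HuggingFaceTB/SmolVLM2-2.2B-Instruct"  # assumed checkpoint id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id)
```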
reacted to jsulz's post with 🚀❤️ 5 days ago
Time flies!

Six months after joining Hugging Face, the Xet team is kicking off the first migrations from LFS to our storage for a number of repositories on the Hub.

More on the nitty-gritty details behind the migration soon, but here are the big takeaways:

🤖 We've successfully completed the first migrations from LFS -> Xet to test the infrastructure and prepare for a wider release

✅ No action on your part needed - you can work with a Xet-backed repo like any other repo on the Hub (for now - major improvements on their way!)

👀 Keep an eye out for the Xet logo to see if a repo you know is on our infra! See the screenshots below to spot the difference 👇

⏩ ⏩ ⏩ Blazing uploads and downloads coming soon. We're gearing up for a full integration with the Hub's Python library that will make building on the Hub faster than ever - special thanks to @celinah and @Wauplin for their assistance.

🎉 Want Early Access? If you're curious and want to test out the bleeding edge that will power the development experience on the Hub, we'd love to partner with you. Let me know!

This is the culmination of a lot of effort from the entire team. Big round of applause to @sirahd @brianronan @jgodlewski @hoytak @seanses @assafvayner @znation @saba9 @rajatarya @port8080 @yuchenglow
reacted to fdaudens's post with ❤️ 5 days ago
replied to AdinaY's post 6 days ago
reacted to AdinaY's post with 🔥 6 days ago
reacted to merve's post with ❤️🚀 6 days ago
Google just released PaliGemma 2 Mix: new versatile instruction vision language models 🔥

> Three new models: 3B, 10B, 28B with resolutions 224 and 448 💙
> Can do vision language tasks with open-ended prompts, understand documents, and segment or detect anything 🤯

Read more https://huggingface.co/blog/paligemma2mix
Try the demo google/paligemma2-10b-mix
All models are here google/paligemma-2-mix-67ac6a251aaf3ee73679dcc4
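
A quick sketch of trying one of the mix checkpoints with transformers follows. The checkpoint id and prompt are assumptions based on the collection above, so verify them on the Hub, and note that the Gemma weights are gated.

```python
# Hedged sketch of running an open-ended prompt against a PaliGemma 2 Mix
# checkpoint. The model id is assumed; pick one from the collection linked above.
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma2-10b-mix-448"  # assumed id (10B, 448px variant)
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("example.jpg")  # any local image
prompt = "describe en"             # open-ended prompts work with the mix models
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=64)

print(processor.decode(generated[0], skip_special_tokens=True))
```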
upvoted an article 6 days ago

π0 and π0-FAST: Vision-Language-Action Models for General Robot Control
