Loïck BOURDOIS

lbourdois

AI & ML interests

👀

Recent Activity

reacted to Wauplin's post with 🔥 about 13 hours ago
reacted to Wauplin's post with 🚀 about 13 hours ago
reacted to Wauplin's post with 🤗 about 13 hours ago

Organizations

Notebooks-explorers · Hugging Face Fellows · FRAUG · Word2vec · Blog-explorers · huggingPartyParis · ZeroGPU Explorers · Social Post Explorers · Hugging Face Discord Community · Les papiers de Merve · Bretagne · ml-fw-prerelease

lbourdois's activity

reacted to Wauplin's post with 🔥🚀🤗 about 13 hours ago
‼️ huggingface_hub's v0.30.0 is out with our biggest update of the past two years!

Full release notes: https://github.com/huggingface/huggingface_hub/releases/tag/v0.30.0.

🚀 Ready. Xet. Go!

Xet is a groundbreaking new protocol for storing large objects in Git repositories, designed to replace Git LFS. Unlike LFS, which deduplicates files, Xet operates at the chunk level—making it a game-changer for AI builders collaborating on massive models and datasets. Our Python integration is powered by [xet-core](https://github.com/huggingface/xet-core), a Rust-based package that handles all the low-level details.
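To build intuition for why chunk-level deduplication beats file-level deduplication, here is a toy Python sketch (not the actual Xet algorithm, which uses content-defined chunk boundaries rather than fixed-size ones): two near-identical checkpoints share most of their chunks, so the chunk store holds far fewer bytes than a whole-file store.

```python
import hashlib

def file_level_dedup(files):
    """Store each distinct file once, keyed by its full-content hash (LFS-style)."""
    store = {}
    for data in files:
        store[hashlib.sha256(data).hexdigest()] = data
    return sum(len(v) for v in store.values())

def chunk_level_dedup(files, chunk_size=4):
    """Split files into chunks and store each distinct chunk once.
    Fixed-size chunks are a toy stand-in for Xet's content-defined chunking."""
    store = {}
    for data in files:
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            store[hashlib.sha256(chunk).hexdigest()] = chunk
    return sum(len(v) for v in store.values())

# Two model checkpoints that differ only in their last bytes:
v1 = b"AAAABBBBCCCCDDDD"
v2 = b"AAAABBBBCCCCEEEE"

print(file_level_dedup([v1, v2]))   # 32: both files stored in full
print(chunk_level_dedup([v1, v2]))  # 20: the three shared chunks are stored once
```

The single-byte-level idea is the same at scale: editing a few layers of a multi-GB model only adds the changed chunks to storage.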

You can start using Xet today by installing the optional dependency:

```bash
pip install -U "huggingface_hub[hf_xet]"
```


With that, you can seamlessly download files from Xet-enabled repositories! And don’t worry—everything remains fully backward-compatible if you’re not ready to upgrade yet.

Blog post: https://huggingface.co/blog/xet-on-the-hub
Docs: https://huggingface.co/docs/hub/en/storage-backends#xet


⚡ Inference Providers

- We’re thrilled to introduce Cerebras and Cohere as official inference providers! This expansion strengthens the Hub as the go-to entry point for running inference on open-weight models.

- Novita is now our third provider to support the text-to-video task, after Fal.ai and Replicate.

- Centralized billing: manage your budget and set team-wide spending limits for Inference Providers! Available to all Enterprise Hub organizations.

```py
from huggingface_hub import InferenceClient
client = InferenceClient(provider="fal-ai", bill_to="my-cool-company")
image = client.text_to_image(
    "A majestic lion in a fantasy forest",
    model="black-forest-labs/FLUX.1-schnell",
)
image.save("lion.png")
```


- No more timeouts when generating videos, thanks to async calls. Available right now for Fal.ai; we expect more providers to adopt the same structure very soon!
posted an update 12 days ago
We introduce FAT5 (Flash Attention T5) ⚡

An implementation of T5 in PyTorch with the UL2 objective, optimized for GPUs for both training and inference thanks to 13 different optimizations.
The main one is a CUDA kernel we designed that extends Flash Attention by @tridao with RPE biases and supports other positional encodings such as RoPE, ALiBi or FIRE.
The resulting kernel is 2 times faster than an SDPA implementation.
We also use Triton kernels to optimize certain parts of the architecture, such as the cross-entropy loss and the RMSNorm layer.
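For intuition only, here is a tiny pure-Python sketch of what "attention with an additive RPE bias" computes (our actual kernel fuses this into Flash Attention in CUDA): a learned bias indexed by the relative offset i − j is added to the attention scores before the softmax.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_with_rpe(q, k, v, bias):
    """q, k, v: lists of vectors (seq_len x d).
    bias: dict mapping the relative offset i - j to an additive score bias
    (the RPE term; RoPE/ALiBi/FIRE plug in at the same spot differently)."""
    d = len(q[0])
    out = []
    for i, qi in enumerate(q):
        scores = [
            sum(a * b for a, b in zip(qi, k[j])) / math.sqrt(d) + bias.get(i - j, 0.0)
            for j in range(len(k))
        ]
        w = softmax(scores)
        out.append([sum(w[j] * v[j][t] for j in range(len(v))) for t in range(len(v[0]))])
    return out

# Toy example: 3 positions, d=2; the bias favors nearby positions.
q = k = v = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
bias = {0: 1.0, 1: 0.5, -1: 0.5, 2: 0.0, -2: 0.0}
print(attention_with_rpe(q, k, v, bias))
```

The point of the kernel is that this bias lookup happens inside the fused Flash Attention loop, so the full seq_len × seq_len score matrix is never materialized.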

The various kernels have been carefully built to be compatible with BF16 and torch.compile to go even faster and achieve efficient pretraining.

All other optimizations are described in a 📝 subsequent blog post available on @huggingface 🤗: CATIE-AQ/FAT5-report.

This methodology enabled us to efficiently pretrain, as a proof of concept, a FAT5 with 147M parameters in French in a reasonable time (1,461 hours for 419B tokens), with limited resources (a single A100, i.e. a computational budget of ~€1,900) and a low carbon footprint (13.5 kg CO2 eq).

The model's weights are also available on Hugging Face: CATIE-AQ/FAT5-small.
It's not very useful in practice: it's a PoC, not an instruction-tuned model (that's planned for later).

All the code is available on GitHub if you want to pretrain your own model in your own language or for a specific domain: https://github.com/catie-aq/flashT5

Finally, note that this was a joint project with @BorisAlbar at hf.co/CATIE-AQ.
reacted to Wauplin's post with 🔥🤗 6 months ago
What a great milestone to celebrate! The huggingface_hub library is slowly becoming a cornerstone of the Python ML ecosystem when it comes to interacting with the @huggingface Hub. It wouldn't be there without the hundreds of community contributions and feedback! Whether you are loading a model, sharing a dataset, running remote inference or starting jobs on our infra, you are for sure using it! And this is only the beginning, so give it a star if you want to follow the project 👉 https://github.com/huggingface/huggingface_hub
reacted to davanstrien's post with 🚀👀 6 months ago
ColPali is revolutionizing multimodal retrieval, but could it be even more effective with domain-specific fine-tuning?

Check out my latest blog post, where I guide you through creating a ColPali fine-tuning dataset using Qwen/Qwen2-VL-7B-Instruct to generate queries for a collection of UFO documents sourced from the Internet Archive.

The post covers:
- Introduction to data for ColPali models
- Using Qwen2-VL for retrieval query generation
- Tips for better query generation

Check out the post here:
https://danielvanstrien.xyz/posts/post-with-code/colpali/2024-09-23-generate_colpali_dataset.html

The resulting Hugging Face dataset: davanstrien/ufo-ColPali
reacted to tomaarsen's post with ❤️👀🚀🔥 6 months ago
🎉SetFit v1.1.0 is out! Training efficient classifiers on CPU or GPU now uses the Sentence Transformers Trainer, and we resolved a lot of issues caused by updates of third-party libraries (like Transformers). Details:

Training a SetFit classifier model consists of 2 phases:
1. Finetuning a Sentence Transformer embedding model
2. Training a Classifier to map embeddings -> classes
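Conceptually, the two phases can be sketched in a few lines of Python. This is a toy stand-in, not the SetFit API: the "encoder" below is a fake embedding function (phase 1 really finetunes a Sentence Transformer), and the classifier is nearest-centroid rather than the logistic regression SetFit typically uses.

```python
import math
from collections import defaultdict

def embed(text):
    """Stub for phase 1: a real SetFit model uses a finetuned
    Sentence Transformer here; we fake a 2-d embedding."""
    return [sum(ord(c) for c in text) % 7, len(text) % 5]

def fit_centroids(examples):
    """Phase 2: learn a mapping from embeddings to classes.
    A nearest-centroid classifier is the simplest possible stand-in."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for text, label in examples:
        e = embed(text)
        s = sums[label]
        s[0] += e[0]; s[1] += e[1]; s[2] += 1
    return {lbl: (s[0] / s[2], s[1] / s[2]) for lbl, s in sums.items()}

def predict(centroids, text):
    # Classify by the nearest class centroid in embedding space.
    e = embed(text)
    return min(centroids, key=lambda lbl: math.dist(e, centroids[lbl]))

centroids = fit_centroids([("great movie", "pos"), ("loved it", "pos"),
                           ("terrible", "neg"), ("awful film", "neg")])
print(predict(centroids, "great movie"))
```

The separation matters for the release: phase 1 is what now runs on the SentenceTransformerTrainer, while phase 2 is untouched.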

🔌The first phase now uses the SentenceTransformerTrainer that was introduced in the Sentence Transformers v3 update. This brings some immediate upsides like MultiGPU support, without any (intended) breaking changes.

➡️ Beyond that, we softly deprecated the "evaluation_strategy" argument in favor of "eval_strategy" (following a Transformers deprecation), and deprecated Python 3.7. In return, we added official support for Python 3.11 and 3.12.

✨ There's some more minor changes too, like max_steps and eval_max_steps now being a hard limit instead of an approximate one, training/validation losses now logging nicely in Notebooks, and the "device" parameter no longer being ignored in some situations.

Check out the full release notes here: https://github.com/huggingface/setfit/releases/tag/v1.1.0
Or read the documentation: https://huggingface.co/docs/setfit
Or check out the public SetFit models for inspiration: https://huggingface.co/models?library=setfit&sort=created

P.S. The model in the code snippet trained in 1 minute, and it can classify ~6,000 sentences per second on my GPU.
reacted to merve's post with ❤️🤝🤗 8 months ago
reacted to severo's post with ❤️🚀 8 months ago
replied to their post 12 months ago

Merci !

I'll try to avoid a 4-month gap before the next article 🙃
The year 2023, and in particular its second half, was quite busy, so maybe I'll split the 2023 article in two.

posted an update 12 months ago
I stopped procrastinating and finally took the time to write the second article of my series of blog posts on SSM: https://huggingface.co/blog/lbourdois/ssm-2022.
In this blog post, I review the history of SSM models released in 2022, covering more than 14 models in a condensed format.
They are separated into two parts: "theoretical" (DSS, S4D, GSS, Mega, S5, etc.) and "applications" (Sashimi, ViS4mer, CCNN, etc.).

To understand everything, it's best to have read the introductory blog post on SSM and S4 first: https://huggingface.co/blog/lbourdois/get-on-the-ssm-train.
All the articles in the series are listed in this space: lbourdois/SSM_blog_posts

Happy reading :)
reacted to kargaranamir's post with 👍 about 1 year ago