Hugging Face


AI & ML interests

The AI community building the future.

Recent Activity

julien-c  updated a dataset about 13 hours ago
huggingface/documentation-images
pagezyhf  updated a dataset about 15 hours ago
huggingface/documentation-images
medmekk  new activity about 21 hours ago
huggingface/documentation-images:auto-round

huggingface's activity

merve posted an update about 17 hours ago
Don't sleep on Meta's new vision-language release! 🔥

facebook/perception-encoder-67f977c9a65ca5895a7f6ba1
facebook/perception-lm-67f9783f171948c383ee7498

Meta dropped Swiss Army knives for vision under an Apache 2.0 license 👏
> image/video encoders for vision language modelling and spatial understanding (object detection etc) 👏
> The vision LM outperforms InternVL3 and Qwen2.5VL 👏
> They also release gigantic video and image datasets

The authors attempt to come up with a single versatile vision encoder aligned on a diverse set of tasks.

They trained Perception Encoder (PE) Core: a new state-of-the-art family of vision encoders that can be aligned for both vision-language and spatial tasks. On zero-shot image tasks, it outperforms the latest SOTA, SigLIP2 👏

> Among the fine-tuned ones, the first is PE-Spatial: a model for bounding-box detection, segmentation, and depth estimation, and it outperforms all other models 😮

> The second is PLM, the Perception Language Model, which combines PE-Core with the Qwen2.5 7B LM. It outperforms all other models (including InternVL3, which was also trained with a Qwen2.5 LM!)

The authors release the following checkpoints in base, large, and giant sizes:

> 3 PE-Core checkpoints (224, 336, 448)
> 2 PE-Lang checkpoints (L, G)
> 1 PE-Spatial checkpoint (G, 448)
> 3 PLM checkpoints (1B, 3B, 8B)

They also release the following datasets 📑
> PE Video: a gigantic video dataset of 1M videos with 120k expert annotations ⏯️
> PLM-Video and PLM-Image: human- and auto-annotated image and video datasets for region-based tasks
> PLM-VideoBench: a new video benchmark on MCQA

burtenshaw posted an update 2 days ago
The rebooted LLM course starts today with an overhauled chapter 1 on Transformers:

👉 Follow the org to join the course: huggingface-course

We’re starting from the foundations of modern generative AI by looking at transformers. This chapter has been expanded in depth and features, and contains new material like:

FREE and CERTIFIED exam on the fundamentals of transformers
deeper exploration of transformer architectures and attention mechanisms
end-to-end exploration of inference strategies for prefill and decode steps

The course has leveled up in complexity and depth, so this is a great time to join in if you want to build your own AI models.
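The attention mechanisms the chapter explores boil down to scaled dot-product attention; here's a minimal NumPy sketch of the idea (illustrative only, not the course's own code):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V and return (output, weights)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V, weights

# toy example: 2 query tokens attending over 3 key/value tokens
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = np.array([[1.0], [2.0], [3.0]])
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a convex combination of the value vectors, with weights given by how similar the query is to each key.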
AdinaY posted an update 3 days ago
MAGI-1 🪄 an autoregressive diffusion video model, released by Sand AI

sand-ai/MAGI-1

✨ 24B with Apache 2.0
✨ Strong temporal consistency
✨ Benchmark-topping performance
merve posted an update 3 days ago
New foundation model on image and video captioning just dropped by NVIDIA AI 🔥

Describe Anything Model (DAM) is a 3B vision language model to generate detailed captions with localized references 😮

The team released the models, the dataset, a new benchmark and a demo 🤩 nvidia/describe-anything-680825bb8f5e41ff0785834c

Most vision LMs focus on the image as a whole, lack localized references in captions, and don't take in visual prompts (points, boxes, drawings around objects)

DAM addresses this on two levels: a new vision backbone that takes in focal crops alongside the full image, and a large-scale dataset 👀

They build the dataset by extending existing segmentation and referring-expression datasets like RefCOCO: images and classes are passed to VLMs, which generate the captions.

Lastly, they release a new benchmark, again with self-supervision: an LLM evaluates the detailed captions with a focus on localization 👏
davanstrien posted an update 3 days ago
Came across a very nice submission from @marcodsn for the reasoning datasets competition (https://huggingface.co/blog/bespokelabs/reasoning-datasets-competition).

The dataset distils reasoning chains from arXiv research papers in biology and economics. Some nice features of the dataset:

- Extracts both the logical structure AND researcher intuition from academic papers
- Adopts the persona of researchers "before experiments" to capture exploratory thinking
- Provides multi-short and single-long reasoning formats with token budgets
- Shows a 7.2% improvement on MMLU-Pro Economics when fine-tuning a 3B model

It's created using the Curator framework with plans to scale across more scientific domains and incorporate multi-modal reasoning with charts and mathematics.

I personally am very excited about datasets like this, which involve creativity in their creation and don't just rely on $$$ to produce a big dataset with little novelty.

Dataset can be found here: marcodsn/academic-chains (give it a like!)
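The "token budgets" idea above can be illustrated with a toy truncation step (whitespace tokens stand in for a real tokenizer here; this is not the dataset's actual pipeline, and `truncate_to_budget` is a hypothetical helper):

```python
def truncate_to_budget(reasoning_chain, budget):
    """Keep at most `budget` tokens of a reasoning chain.

    Whitespace splitting is a toy stand-in for a real tokenizer: a
    'multi-short' format would use a small budget per chain, while a
    'single-long' format spends one large budget on a single chain.
    """
    tokens = reasoning_chain.split()
    return " ".join(tokens[:budget])

chain = "First restate the hypothesis then check each assumption against the data"
short = truncate_to_budget(chain, 5)
```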
albertvillanova posted an update 4 days ago
smolagents v1.14.0 is out! 🚀
🔌 MCPClient: A sleek new client for connecting to remote MCP servers, making integrations more flexible and scalable.
🪨 Amazon Bedrock: Native support for Bedrock-hosted models.
SmolAgents is now more powerful, flexible, and enterprise-ready. 💼

Full release 👉 https://github.com/huggingface/smolagents/releases/tag/v1.14.0
#smolagents #LLM #AgenticAI
AdinaY posted an update 8 days ago
Wan2.1-FLF2V 🎥 a 14B start-end frame video generation model just released by Alibaba_Wan 🔥

Wan-AI/Wan2.1-FLF2V-14B-720P

✨ Give it two images (start & end), it generates a smooth, high-quality video in between.
✨ Apache 2.0 licensed
✨ Built on DiT + Flow Matching
burtenshaw posted an update 9 days ago
Hacked my presentation building with inference providers, Cohere Command A, and sheer simplicity. Use this script if you’re burning too much time on presentations:

🔗 https://github.com/burtenshaw/course_generator/blob/main/scripts/create_presentation.py

This is what it does:
- uses Command A to generate slides and speaker notes based on some material
- renders the material in the remark open format and imports all images, tables, etc.
- lets you review the slides as markdown and iterate
- exports to either PDF or PPTX using backslide

🚀 Next steps are: add text to speech for the audio and generate a video. This should make Hugging Face educational content scale to a billion AI Learners.
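The remark format mentioned above is just markdown with `---` slide separators and `???` speaker-note markers, so the assembly step can be sketched in a few lines (the LLM call and backslide export are omitted, and `to_remark` is an illustrative helper, not code from the linked script):

```python
def to_remark(slides):
    """Join (title, bullets, notes) tuples into remark-format markdown.

    remark separates slides with '---' and introduces presenter notes
    with '???' inside a slide.
    """
    parts = []
    for title, bullets, notes in slides:
        body = f"# {title}\n" + "\n".join(f"- {b}" for b in bullets)
        if notes:
            body += f"\n\n???\n{notes}"
        parts.append(body)
    return "\n\n---\n\n".join(parts)

# hypothetical slides as an LLM might return them
deck = to_remark([
    ("Transformers", ["attention", "prefill vs decode"], "Keep this brief."),
    ("Inference", ["KV cache"], ""),
])
```

Because the deck stays plain markdown until the export step, reviewing and iterating is just editing text.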
AdinaY posted an update 10 days ago
After yesterday's wave of reveals, here's what's going down today in the Chinese AI community 🔥

✨ Kuaishou unveiled Kling AI 2.0
https://klingai.com/global/

✨ MiniMax AI dropped their latest TTS model Speech-02
https://minimax.io/audio

✨ Tencent Hunyuan teased the upcoming open model - Hunyuan Portrait
HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation (2503.18860)

✨ ModelScope launched an MCP Square, with 1,500 MCPs already online
https://modelscope.cn/mcp

And it's only Tuesday🌞
AdinaY posted an update 11 days ago
🔥 Big day for the Chinese open source AI community: zh-ai-community

> Skywork AI :
Released 7B/32B reasoning models that excel in math & coding
Skywork/skywork-or1-67fa1bcb41b436ef2def76b9

> Moonshot AI & Numina:
Dropped 1.5B/7B POWERFUL formal math reasoning models
AI-MO/kimina-prover-preview-67fb536b883d60e7ca25d7f9

> Zhipu AI :
Launched 9B/32B reasoning models powering their first general AI agent - AutoGLM ✨
THUDM/glm-4-0414-67f3cbcb34dd9d252707cb2e

> DeepSeek :
Announced it will open-source its internal inference engine: DeepSeek Inference Engine
https://github.com/deepseek-ai/open-infra-index/blob/main/OpenSourcing_DeepSeek_Inference_Engine/README.md

Can't wait for more exciting releases 🥳

