All HF Hub posts

mlabonne posted an update 2 days ago
LiquidAI open-sources a new generation of edge LLMs! 🥳

Based on a new hybrid architecture, these 350M, 700M, and 1.2B models are both fast and performant, ideal for on-device deployment.

I recommend fine-tuning them to power your next edge application. We already provide Colab notebooks to guide you. More to come soon!

📝 Blog post: https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models
🤗 Models: LiquidAI/lfm2-686d721927015b2ad73eaa38
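
If you want to kick the tires before committing to a fine-tune, here's a minimal inference sketch with transformers. The exact repo id (LiquidAI/LFM2-1.2B) is an assumption, so check the collection above for the real names:

```python
# Minimal sketch: load an LFM2 checkpoint and generate a reply.
# Assumes a transformers version with LFM2 support and the repo id below.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # assumed id; 350M/700M variants should work the same
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "Give me one tip for on-device deployment."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```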
nroggendorff posted an update 2 days ago
Since when are H200s on ZeroGPU?
hba123 posted an update 2 days ago
I am happy to announce that Ark now supports the following robots:

1. Franka Panda
2. Kuka LWR
3. UFactory XArm
4. Husky Robot

Everything is done in Python. You can even control your robot from a Jupyter notebook.

Check out the tutorials: https://arkrobotics.notion.site/ARK-Home-22be053d9c6f8096bcdbefd6276aba61

Check out the code: https://github.com/orgs/Robotics-Ark/repositories

Check out the documentation: https://robotics-ark.github.io/ark_robotics.github.io/docs/html/index.html

Check out the paper: https://robotics-ark.github.io/ark_robotics.github.io/static/images/ark_framework_2025.pdf

Hope you find it useful. Let us know if you want a specific feature! We would love to support you 😄
AdinaY posted an update about 19 hours ago
Kimi-K2 is now available on the Hub 🔥🚀
This is a trillion-parameter MoE model focused on long context, code, reasoning, and agentic behavior.

moonshotai/kimi-k2-6871243b990f2af5ba60617d

✨ Base & Instruct
✨ 1T total / 32B active - Modified MIT License
✨ 128K context length
✨ Muon optimizer for stable trillion-scale training
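
Since the full model is far too large to load locally, the practical way to try it is through a hosted, OpenAI-compatible endpoint. A minimal sketch, assuming a provider (or your own vLLM deployment) serving the Instruct checkpoint; the base_url and model id are placeholders:

```python
# Minimal sketch: query Kimi-K2 via an OpenAI-compatible API.
# base_url, api_key, and the model id are assumptions -- substitute your provider's values.
from openai import OpenAI

client = OpenAI(base_url="https://your-provider.example/v1", api_key="YOUR_KEY")
resp = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",  # assumed serving name
    messages=[{"role": "user", "content": "Summarize the Muon optimizer in two sentences."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```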
kanaria007 posted an update about 21 hours ago
✅ New Article on Hugging Face: Teaching AI to Think Like a System — Not a Toolkit

Title:
🏗️ Understanding Structured Cognitive Architecture: A Unified Framework for AI Reasoning Systems
🔗 Read it here: https://huggingface.co/blog/kanaria007/understanding-structured-cognitive-architecture

Summary:
After exploring how AI can select reasoning modes or learn from failure, this new article zooms out:
*How do all these capabilities form a single mind, not just a menu of functions?*

The **Structured Cognitive Architecture** defines a unified framework where protocols interact coherently — forming a self-organizing, reflective, and ethically grounded reasoning system.

This architecture enables agents to:
• Integrate memory, ethics, reasoning, and identity across layers
• Select and execute reasoning jumps with traceable structure
• Coordinate failure recovery and adaptive learning
• Maintain cross-session identity and self-editing capability

It’s not modular stacking.
It’s **structured systemhood** — cognition with intentional protocol interaction.

Key Features:
• Three-layer design (Foundational, Extended, Learning)
• Semantic rule-layer to avoid protocol interference
• Integrated flow: problem → jump → feedback → pattern update
• Built-in ethics, rollback, and trace integrity

The framework integrates protocols like:
• jump-generator, failure-trace-log, memory-loop, identity-construct
• Extended modules: chronia, structure-cross, evaluation-planning, and more

🧠 Protocol Dataset: kanaria007/agi-structural-intelligence-protocols

Useful for:
• Researchers designing unified AGI architectures
• Developers building reflective protocol-based agents
• Anyone curious how AI can think as a *system*

This isn’t modularity.
It’s **meta-coherence by design**.
MonsterMMORPG posted an update 1 day ago
MultiTalk (from MeiGen) Full Tutorial With 1-Click Installer - Make Talking and Singing Videos From Static Images. The tutorial also shows how to set everything up and run it on RunPod and Massed Compute, two affordable private cloud GPU services.

Tutorial video link > https://youtu.be/8cMIwS9qo4M

Video Chapters

0:00 Intro & MultiTalk Showcase
0:28 Singing Animation Showcase
0:57 Tutorial Structure Overview (Windows, Massed Compute, RunPod)
1:10 Windows - Step 1: Download & Extract the Main ZIP File
1:43 Windows - Prerequisites (Python, Git, CUDA, FFmpeg)
2:12 Windows - How to Perform a Fresh Installation (Deleting venv & custom_nodes)
2:42 Windows - Step 2: Running the Main ComfyUI Installer Script
4:24 Windows - Step 3: Installing MultiTalk Nodes & Dependencies
5:05 Windows - Step 4: Downloading Models with the Unified Downloader
6:18 Windows - Tip: Setting Custom Model Paths in ComfyUI
7:18 Windows - Step 5: Updating ComfyUI to the Latest Version
7:39 Windows - Step 6: Launching ComfyUI
7:53 Workflow Usage - Using the 480p 10-Second Workflow
8:07 Workflow Usage - Configuring Basic Parameters (Image, Audio, Resolution)
8:55 Workflow Usage - Optimizing Performance: 'Blocks to Swap' & GPU Monitoring
9:49 Workflow Usage - Crucial Step: Calculating & Setting the Number of Frames
10:48 Workflow Usage - First Generation: Running the 480p Workflow
12:01 Workflow Usage - Troubleshooting: How to Fix 'Out of VRAM' Errors
13:51 Workflow Usage - Introducing the High-Quality Long Context Workflow (720p)
14:09 Workflow Usage - Configuring the 720p 10-Step High-Quality Workflow
16:18 Workflow Usage - Selecting the Correct Model (GGUF) & Attention Mechanism
17:58 Workflow Usage - Improving Results by Changing the Seed
18:36 Workflow Usage - Side-by-Side Comparison: 480p vs 720p High-Quality
20:26 Workflow Usage - Behind the Scenes: How the Intro Videos Were Made
21:32 Part 2: Massed Compute Cloud GPU Tutorial
22:03 Massed Compute - Deploying a GPU Instance (H100)
…
sergiopaniego posted an update 3 days ago
Test SmolLM3, the newest fully open model released by @HuggingFaceTB!

It's smol (3B), multilingual (6 languages), comes with dual-mode reasoning (think/no_think modes), and supports long contexts (128k).

Try it now in the notebook below!! ⬇️

Colab notebook: https://colab.research.google.com/github/sergiopaniego/samples/blob/main/smollm3_3b_inference.ipynb
GitHub notebook: https://github.com/sergiopaniego/samples/blob/main/smollm3_3b_inference.ipynb
Blog: https://huggingface.co/blog/smollm3
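
If you'd rather script it than open the notebook, here's a minimal sketch of toggling the dual reasoning mode with transformers. The enable_thinking template flag is taken from the SmolLM3 model card, so treat it as an assumption; the /think and /no_think system-prompt flags are the documented fallback:

```python
# Minimal sketch: SmolLM3 inference with thinking mode toggled via the chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user", "content": "What is 17 * 24?"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # assumed flag: set False for no_think mode
)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```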
3LC posted an update about 13 hours ago
🚀 Announcing the Synthetic-to-Real Multi-Class Object Detection Challenge!

We're excited to share that the challenge is now live on Kaggle!

This exciting competition is brought to you by 3LC in partnership with Duality AI, creators of the powerful FalconCloud tool for generating targeted synthetic data. Together, we're offering a unique opportunity to push the boundaries of object detection through high-fidelity, simulation-to-real workflows.

🧪 What Makes This Challenge Special?
💻 Create customized training data with Duality's cloud-based FalconCloud scenarios
🧠 Analyze data weaknesses and take precise, data-driven actions using 3LC's robust tooling
⚙️ Optimize data for peak model training

🏆 Why Join?
• Win cash prizes, certificates, and global recognition
• Gain exposure to real-world simulation workflows used in top AI companies
• Collaborate and compete with leading minds in computer vision, ML, and AI

Whether you're a student, researcher, or industry pro, this challenge is your chance to bridge the Sim2Real gap and showcase your skills in building high-performance object detection models.

🔗 Ready to compete?
https://www.kaggle.com/competitions/multi-class-object-detection-challenge

CultriX posted an update 1 day ago
New Space: Generate Knowledge Graphs from input data using LLMs (via OpenRouter). It's a trial project, but it seems to be working alright so far!

CultriX/Generate-Knowledge-Graphs

Below is an example after feeding it the Wikipedia page about Elon Musk.
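
The Space's own code isn't shown here, but the core idea is easy to sketch: ask an OpenRouter-served LLM to emit (subject, relation, object) triples, then render them as a graph. A minimal sketch, not the Space's actual implementation; the model id is just one valid choice:

```python
# Minimal sketch: extract knowledge-graph triples via OpenRouter's OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_OPENROUTER_KEY")
text = "Elon Musk founded SpaceX in 2002 and is the CEO of Tesla."
resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",  # any OpenRouter model id works here
    messages=[{
        "role": "user",
        "content": "Extract (subject, relation, object) triples from the text below. "
                   "Return them as a JSON list of 3-element lists.\n\n" + text,
    }],
)
print(resp.choices[0].message.content)  # e.g. [["Elon Musk", "founded", "SpaceX"], ...]
```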
jbilcke-hf posted an update 3 days ago
Are you looking to run a robot simulator, or to run long robot-policy training tasks, but don't have a GPU at home?

Well… you can run MuJoCo inside a Hugging Face Space!

All you have to do is to clone this space:
jbilcke-hf/train-robots-with-mujoco

Don't forget to pick an Nvidia GPU for your Space so you can get some nice OpenGL renders!

Are you new to MuJoCo and/or JupyterLab notebooks?

You can get started with this tutorial (select "Open from URL" then paste the URL to this notebook):
jbilcke-hf/train-robots-with-mujoco
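
And if MuJoCo itself is new to you, here's a minimal sketch of what the notebook boils down to: load a scene, step the physics, and grab an offscreen render. The toy XML scene is just an illustration, and the offscreen Renderer needs a working OpenGL context (hence the GPU):

```python
# Minimal sketch: step a MuJoCo scene and render a frame offscreen.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <light pos="0 0 3"/>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 0.5">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1" rgba="0.8 0.2 0.2 1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

for _ in range(500):          # let the box fall and settle on the plane
    mujoco.mj_step(model, data)

renderer = mujoco.Renderer(model)  # offscreen OpenGL renderer
renderer.update_scene(data)
pixels = renderer.render()    # (height, width, 3) uint8 numpy array
print(pixels.shape)
```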

Happy robot hacking! 🦾