All HF Hub posts

SelmaNajih001 
posted an update 2 days ago
Finally, I uploaded the model I developed for my master’s thesis! Given a financial event, it provides explained predictions based on a dataset of past news and central bank speeches.
Try it out here:
SelmaNajih001/StockPredictionExplanation
(Just restart the space and wait a minute)

The dataset used for RAG can be found here:
SelmaNajih001/FinancialNewsAndCentralBanksSpeeches-Summary-Rag
The dataset used for training is here:
SelmaNajih001/FinancialClassification

I also wrote an article explaining how I did the training. You can find it here: https://huggingface.co/blog/SelmaNajih001/explainable-financial-predictions
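For readers curious about the mechanics, here is a minimal sketch of the retrieval step behind a setup like this, assuming a standard sentence-transformers pipeline; the embedding model, the sample documents, and the variable names are illustrative, not the Space's actual code.

from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model; the Space may use a different one
embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Stand-ins for entries from the RAG dataset above
corpus = [
    "The Fed raised rates by 50 bps, citing persistent inflation.",
    "ECB signalled a pause after weaker-than-expected PMI data.",
]
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

# A financial event acts as the query
event_emb = embedder.encode("Central bank hikes rates unexpectedly", convert_to_tensor=True)

# Retrieve the most similar past documents to ground the explained prediction
hits = util.semantic_search(event_emb, corpus_emb, top_k=2)[0]
context = [corpus[h["corpus_id"]] for h in hits]
print(context)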

Molbap 
posted an update 1 day ago
🚀 New blog: Maintain the unmaintainable – 1M+ Python LOC, 400+ models

How do you stop a million-line library built by thousands of contributors from collapsing under its own weight?
At 🤗 Transformers, we do it with explicit software-engineering tenets, principles that make the codebase hackable at scale.

🔍 Inside the post:
– One Model, One File: readability first — you can still open a modeling file and see the full logic, top to bottom.
– Modular Transformers: visible inheritance that cuts maintenance cost by ~15× while keeping models readable.
– Config-Driven Performance: FlashAttention, tensor parallelism, and attention scheduling are config-level features, not rewrites.

Written with @lysandre, @pcuenq and @yonigozlan, this is a deep dive into how Transformers stays fast, open, and maintainable.

Read it here → transformers-community/Transformers-tenets
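As a concrete taste of the config-driven point: in recent Transformers versions the attention backend is a load-time argument, not a code rewrite. A minimal sketch, where the model id is just an example and kwarg names can vary across versions:

import torch
from transformers import AutoModelForCausalLM

# Swapping attention backends is a config/load-time choice, not a rewrite
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",                      # any supported model id
    attn_implementation="flash_attention_2",  # or "sdpa" / "eager"
    torch_dtype=torch.bfloat16,
)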
sergiopaniego 
posted an update 2 days ago
A few days ago, Thinking Machines Lab released “LoRA Without Regret”, showing that LoRA can match full fine-tuning performance when configured right.

Naturally, we decided to reproduce the results with TRL and release a guide!

https://huggingface.co/docs/trl/main/en/lora_without_regret
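If you just want the shape of the recipe, here is a minimal LoRA fine-tuning sketch with TRL + PEFT; the dataset, model, and hyperparameters below are illustrative placeholders, not the guide's exact configuration.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

peft_config = LoraConfig(
    r=16,                         # LoRA rank
    lora_alpha=32,                # scaling factor
    target_modules="all-linear",  # apply LoRA to all linear layers
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lora-sft"),
    peft_config=peft_config,
)
trainer.train()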
Kseniase 
posted an update 3 days ago
8 Emerging trends in Reinforcement Learning

Reinforcement learning is having a moment - and not just this week. Some of its directions are already showing huge promise, while others are still early but exciting. Here’s a look at what’s happening right now in RL:

1. Reinforcement Pre-Training (RPT) → Reinforcement Pre-Training (2506.08007)
Reframes next-token pretraining as RL with verifiable rewards, yielding scalable reasoning gains

2. Reinforcement Learning from Human Feedback (RLHF) → Deep reinforcement learning from human preferences (1706.03741)
The most established approach. It trains a model using human preference feedback, building a reward model and then optimizing the policy to generate outputs people prefer (a toy sketch of the reward-model loss follows this list)

3. Reinforcement Learning with Verifiable Rewards (RLVR) → Reinforcement Learning with Verifiable Rewards Implicitly Incentivizes Correct Reasoning in Base LLMs (2506.14245)
Moves from subjective (human-labeled) rewards to objective ones that can be automatically verified, as in math and code, or using rubrics as rewards, for example → Reinforcement Learning with Rubric Anchors (2508.12790), Rubrics as Rewards: Reinforcement Learning Beyond Verifiable Domains (2507.17746)

4. Multi-objective RL → Pareto Multi-Objective Alignment for Language Models (2508.07768)
Trains LMs to balance multiple goals at once, like being helpful but also concise or creative, ensuring that improving one goal doesn’t ruin another

5. Parallel thinking RL → Parallel-R1: Towards Parallel Thinking via Reinforcement Learning (2509.07980)
Trains parallel chains of thought, boosting math accuracy and final ceilings. It first teaches the model the “parallel thinking” skill on easier problems, then uses RL to refine it on harder ones
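As promised in item 2, here is a toy sketch of the reward-modeling objective at the heart of RLHF: a Bradley-Terry preference loss that pushes the reward model to score preferred responses above rejected ones. The scores below are placeholders, not real model outputs.

import torch
import torch.nn.functional as F

def preference_loss(chosen_scores, rejected_scores):
    # -log(sigmoid(r_chosen - r_rejected)), averaged over the batch
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

chosen = torch.tensor([1.2, 0.4])    # reward scores for preferred answers
rejected = torch.tensor([0.3, 0.9])  # reward scores for rejected answers
print(preference_loss(chosen, rejected))  # shrinks as chosen outscores rejected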

Read further below ⬇️
And if you like this, subscribe to the Turing post: https://www.turingpost.com/subscribe

Also, check out our recent guide about the past, present and future of RL: https://www.turingpost.com/p/rlguide
prithivMLmods 
posted an update about 16 hours ago
I've built the new Image Studio with the Gemini image generation models, covering multiple tasks: the imagen-4.0-fast-generate-001 model for Image Generation (Text-to-Image) and Multi-Image Editing (Image-to-Image), and Draw-to-Image powered by gemini-2.5-flash-image (aka Nano Banana).

⭐ Gemini-Image-Studio: prithivMLmods/Gemini-Image-Studio (Latest)
🤞 Old-App: prithivMLmods/Nano-Banana-AIO
🥊 GitHub: https://github.com/prithivsakthiur/gemini-image-studio-hf

To proceed, you need to add your Gemini API key. Your API key is stored only for the duration of your session and will be lost when you reload or exit the page. It will not be shared or exposed anywhere.
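For reference, calling these models directly from Python looks roughly like the sketch below, assuming the google-genai SDK; method and config names may differ slightly across SDK versions, and the prompt is just an example.

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_GEMINI_API_KEY")  # key stays local

# Text-to-Image with the Imagen model used by the Space
result = client.models.generate_images(
    model="imagen-4.0-fast-generate-001",
    prompt="a lighthouse at dawn, watercolor style",
    config=types.GenerateImagesConfig(number_of_images=1),
)
result.generated_images[0].image.save("output.png")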
AdamF92 
posted an update about 23 hours ago
Hi, I just published a research paper introducing my Reactive Transformer (RxT) architecture. I'd be grateful if you could check it out and upvote it on Hugging Face Daily Papers - Reactive Transformer (RxT) -- Stateful Real-Time Processing for Event-Driven Reactive Language Models (2510.03561)

The architecture is based on stateful real-time processing with an innovative asynchronous memory update. Instead of reprocessing the entire conversation history for each message, it processes only the current query, with all the context moved to dedicated memory layers. Memory is updated after the answer is generated, so it doesn't affect latency - in tests, time to first token was almost the same as the time to generate a single token. It also achieves better quality/accuracy in multi-turn dialogue than a stateless decoder-only model of the same size.

Initial experiments were small scale (12M to 160M param models trained on simple synthetic datasets), but I'm now starting to train a bigger 270M-param model on real data.
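For intuition, here is a toy illustration of the event-driven pattern described above (not the actual RxT implementation): answer from the current memory state, then update memory off the critical path.

import threading

class ToyReactiveModel:
    def __init__(self):
        self.memory = "empty"  # stands in for dedicated memory layers

    def respond(self, query):
        # Generation sees only the query plus fixed-size memory, never the full history
        answer = f"answer to {query!r} given memory [{self.memory}]"
        # Memory update runs asynchronously, so it doesn't add to time-to-first-token
        threading.Thread(target=self._update_memory, args=(query, answer)).start()
        return answer

    def _update_memory(self, query, answer):
        self.memory = f"last turn: {query} -> {answer[:20]}..."

model = ToyReactiveModel()
print(model.respond("What is RxT?"))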

Collection: ReactiveAI/reactive-transformer-poc-rxt-alpha-supervised-models-68e4004a4a59366e01a7b86f
Profile: ReactiveAI
evijit 
posted an update 1 day ago
AI for Scientific Discovery Won't Work Without Fixing How We Collaborate.

My co-author @cgeorgiaw and I just published a paper challenging a core assumption: that the main barriers to AI in science are technical. They're not. They're social.

Key findings:

🚨 The "AI Scientist" myth delays progress: Waiting for AGI devalues human expertise and obscures science's real purpose: cultivating understanding, not just outputs.
📊 Wrong incentives: Datasets have 100x longer-lasting impact than models, yet data curation is undervalued.
⚠️ Broken collaboration: Domain scientists want understanding. ML researchers optimize performance. Without shared language, projects fail.
🔍 Fragmentation costs years: Harmonizing just 9 cancer files took 329 hours.

Why this matters: Cracking upstream bottlenecks like efficient PDE solvers could accelerate discovery across multiple sciences. CASP mobilized a community around protein structure, enabling AlphaFold. We need this for dozens of challenges.

Thus, we're launching Hugging Science! A global community addressing these barriers through collaborative challenges, open toolkits, education, and community-owned infrastructure. Please find all the links below!

Paper: AI for Scientific Discovery is a Social Problem (2509.06580)
Join: hugging-science
Discord: https://discord.com/invite/VYkdEVjJ5J
lunarflu 
posted an update 2 days ago
Cool stuff from these past weeks on Hugging Face! 🤗 🚀
• 📈Trackio, local-first W&B alternative (minimal sketch after this list)
https://github.com/gradio-app/trackio/issues
• 🌍EmbeddingGemma, 300M-param, multilingual embeddings, on-device
https://huggingface.co/blog/embeddinggemma
• 💻Open LLMs in VS Code (Inference Providers)
https://x.com/reach_vb/status/1966185427582497171
• 🤖Smol2Operator GUI agents
https://huggingface.co/blog/smol2operator
• 🖼️Gradio visible watermarking
https://huggingface.co/blog/watermarking-with-gradio
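As teased above, a minimal Trackio sketch, assuming its wandb-style init/log/finish API; check the repo for the current interface.

import trackio

trackio.init(project="demo-run")
for step in range(3):
    trackio.log({"step": step, "loss": 1.0 / (step + 1)})  # toy metric
trackio.finish()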
sequelbox 
posted an update 1 day ago
NEW RELEASE: Esper 3.1!

- Esper is our full-stack, full-cycle coding, DevOps, and architecture specialist!
- Our newest, best DeepSeek technical datasets emphasize more challenging queries and tough real-world coding tasks across a variety of programming languages and development paradigms:
- Titanium 3 for coding and reasoning in DevOps and architecture: sequelbox/Titanium3-DeepSeek-V3.1-Terminus
- Tachibana 3 for high-difficulty code production in a variety of topics and programming languages:
- sequelbox/Tachibana3-Part1-DeepSeek-V3.1-Terminus
- sequelbox/Tachibana3-Part2-DeepSeek-V3.2
- Mitakihara for MLOps, AI building, use, expertise, and research: sequelbox/Mitakihara-DeepSeek-R1-0528

Our first release in the Esper 3.1 series is built on Qwen3-4B-Thinking-2507. GET IT NOW, FOR EVERYONE: ValiantLabs/Qwen3-4B-Thinking-2507-Esper3.1
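Loading the checkpoint should follow the standard Transformers pattern; a hedged sketch, where the prompt and generation settings are illustrative defaults rather than Valiant Labs' recommendations:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ValiantLabs/Qwen3-4B-Thinking-2507-Esper3.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Dockerfile for a Flask app."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))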

We'll be bringing Esper 3.1 to more, larger models as soon as we can; you can help this happen faster with a donation: sequelbox/SupportOpenSource

We're really happy about this one; let us know how Esper 3.1 works for you!

Support open source. It's our only hope for an AI future you'll actually want to live in.

More to come soon!

with our love and appreciation,
allegra
codelion 
posted an update 1 day ago
🚀 Adaptive Classifier v0.1.0: Now with ONNX Runtime Support!

We're excited to announce a major update to Adaptive Classifier - a flexible, continuous learning classification system that adapts to new classes without retraining!

What's New:

⚡ ONNX Runtime Integration: Get 1.14x faster CPU inference out of the box (up to 4x on x86 processors)

📦 INT8 Quantization: Models are now 4x smaller with minimal accuracy loss, making deployment easier and faster

🎯 Smart Loading: Automatically uses the best model variant for your hardware - quantized for speed by default, or unquantized for maximum accuracy

🔄 7.5x Faster Model Loading: Get started quickly with optimized model initialization

How It Works:

Adaptive Classifier lets you build text classifiers that continuously learn from new examples without catastrophic forgetting. Perfect for:
- Dynamic classification tasks where classes evolve over time
- Few-shot learning scenarios with limited training data
- Production systems that need to adapt to new categories

The new ONNX support means you get production-ready speed on CPU without any code changes - just load and run!

Try it now:

from adaptive_classifier import AdaptiveClassifier

# Load with ONNX automatically enabled (quantized for best performance)
classifier = AdaptiveClassifier.load("adaptive-classifier/llm-router")

# Add examples dynamically
classifier.add_examples(
    ["Route this to GPT-4", "Simple task for GPT-3.5"],
    ["strong", "weak"]
)

# Predict with optimized inference
predictions = classifier.predict("Complex reasoning task")

Check out our LLM Router model to see it in action:
adaptive-classifier/llm-router

GitHub Repository:
https://github.com/codelion/adaptive-classifier

Install now: pip install adaptive-classifier

We'd love to hear your feedback and see what you build with it!

#MachineLearning #NLP #ONNX #ContinuousLearning #TextClassification