AI & ML interests

None defined yet.

Recent Activity

blog-explorers's activity

meg posted an update 1 day ago
giux78 posted an update 16 days ago
The LLaMA 4 release highlights the importance of political and social bias. According to Meta's own evaluation, described in the release blog post:
- Refusals on contentious prompts dropped from 7% (LLaMA 3.3) to under 2%
- Unequal response refusals are now under 1%
- Political lean bias is said to be halved compared to LLaMA 3.3 and comparable to Grok

However, @efederici, @mferraretto, @FinancialSupport and I released an independent open-source benchmark called Propaganda some weeks ago to measure political bias in LLMs: https://github.com/mii-llm/propaganda

The chart below compares multiple leading models on the basis of ratings across a range of prompts designed to expose ideological leanings.

Despite Meta's stated neutrality goals, LLaMA 4 ranks at the very top in total ratings aligned with a clear ideological bias. The models were tested on their ability to respond even-handedly to politically sensitive prompts, and LLaMA 4 scored even higher than models known for strong alignment policies, such as GPT-4o.

LLMs may be refusing less, but they still show bias through content framing. This suggests that refusal rates alone are not a sufficient measure of ideological bias. Relying solely on internal evaluations from AI labs also raises concerns about transparency and objectivity.
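To make the evaluation concrete, here is a minimal sketch of the kind of per-prompt rating aggregation behind such a chart, assuming a simple signed rating scale; the actual Propaganda schema and scoring may differ:

```python
# Hypothetical aggregation sketch, NOT the actual Propaganda scoring code.
# Assumption: each (model, prompt) answer gets a signed ideological rating,
# e.g. -2 = strong lean one way ... +2 = strong lean the other, 0 = even-handed.
from collections import defaultdict

ratings = [  # (model, prompt_id, rating) -- toy data
    ("llama-4", "p1", 2), ("llama-4", "p2", 1),
    ("gpt-4o",  "p1", 1), ("gpt-4o",  "p2", 0),
]

by_model = defaultdict(list)
for model, _prompt, rating in ratings:
    by_model[model].append(rating)

for model, scores in by_model.items():
    total_lean = sum(abs(s) for s in scores)  # magnitude of bias
    direction = sum(scores) / len(scores)     # mean signed lean
    print(f"{model}: total={total_lean}, mean lean={direction:+.2f}")
```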
jjokah posted an update 17 days ago
# Video Tokenization for efficient AI video processing

Meet **VidTok**, a new open-source video tokenization technique developed by Microsoft Research to address the computational challenges of processing large volumes of video data. The core problem VidTok tackles is the inefficiency caused by redundant information in raw video pixels.

VidTok converts complex video footage into compact, structured units called tokens, making it easier and more efficient for AI systems to analyze, understand, and generate video content.

Research Paper: https://arxiv.org/abs/2412.13061
VidTok Code: https://github.com/microsoft/VidTok
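To make the idea concrete, here is a toy sketch of the encode/decode workflow a video tokenizer enables. The class, compression rates, and shapes below are illustrative assumptions, not VidTok's actual API or architecture (see the repo for that):

```python
# Toy sketch: class, compression rates, and shapes are illustrative
# assumptions, not VidTok's actual API or architecture.
import torch
import torch.nn as nn

class TinyVideoTokenizer(nn.Module):
    """A 3D conv compresses pixels into a compact latent grid ("tokens");
    a transposed conv maps tokens back to pixels."""
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        # Compress 4x in time and 8x in each spatial dimension.
        self.enc = nn.Conv3d(3, latent_dim, kernel_size=(4, 8, 8), stride=(4, 8, 8))
        self.dec = nn.ConvTranspose3d(latent_dim, 3, kernel_size=(4, 8, 8), stride=(4, 8, 8))

    def encode(self, video: torch.Tensor) -> torch.Tensor:
        return self.enc(video)

    def decode(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.dec(tokens)

video = torch.randn(1, 3, 16, 256, 256)  # (batch, channels, frames, H, W)
tok = TinyVideoTokenizer()
tokens = tok.encode(video)               # -> (1, 8, 4, 32, 32)
print(tokens.numel() / video.numel())    # ~1/96 of the raw pixel count
recon = tok.decode(tokens)               # back to pixel space
```

Downstream models then operate on the compact tokens instead of raw pixels, which is where the efficiency gain comes from.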
stefan-it posted an update 25 days ago
Woohoo 🥳 I have finished my 2025 GPU workstation build, and I am very excited to train awesome new open source models on it.

I built my last GPU workstation 5 years ago, featuring an AMD Ryzen 9 5900X and 64GB of G.SKILL Trident Z RGB on an ASRock X570 Taichi, cooled by an Alphacool Eisbär 420. The GPU was a Zotac RTX 3090 AMP Extreme. Unfortunately, I was never satisfied with the case, a Fractal Design Define 7: it is definitely too small, airflow is not optimal (I had to keep the front door open all the time), and it arrived with a partly damaged side panel.

For my new build, I used the following components: the outstanding new AMD Ryzen 9 9950X3D with 64GB of Corsair Dominator Titanium (what a name). As a huge Noctua fan (warm greetings to my Austrian neighbors!) I am using the brand-new Noctua NH-D15 G2 on an ASRock X870E Taichi, all in an amazing Lian Li LANCOOL III chassis. One joke that only NVIDIA Blackwell users will understand: you definitely need a tempered glass panel to check if your GPU cables/connectors start melting 😂 And the best is yet to come: I returned my previously bought Zotac RTX 5090 Solid to the eBay seller (because of... missing ROPs; again, only NVIDIA Blackwell users will understand) and bought a Zotac RTX 5090 AMP Extreme INFINITY (yes, the long name indicates that this is Zotac's flagship model) from a more trustworthy source (NBB in Germany).

I am so happy to start training and fine-tuning new open source models - stay tuned!!!
giux78 posted an update 28 days ago
This is a truly inspirational story. Please help us spread the word, @clem, @thomwolf and everyone who supports open source AI.

A few weeks ago, @mmuffo94 and @cittiberto from indigo_ai launched the Chatbot Arena for the Italian language: https://indigo.ai/it/chatbot-arena-italia/.

To our surprise, among the top-ranked models is mii-llm/maestrale-chat-v0.4-beta, a carefully fine-tuned version of mistralai/Mistral-7B-v0.1 developed by @efederici and @mferraretto from mii-llm and released nearly a year ago.

At this very moment, as shown in the screenshot, mii-llm/maestrale-chat-v0.4-beta is ranked 8th, right between ChatGPT-4.5 and ChatGPT-4o.

It's likely that, for several months, the best Italian-speaking LLM has been an open-source 7B model created by open-source contributors, and hardly anyone knew it.
chansung posted an update 29 days ago
A simple guide to the GRPO recipe in Open-R1, which is built on top of TRL.

I think the FastAPI wrapper around vLLM with the WeightSyncWorker is a pretty cool feature. Also, many predefined reward functions are available out of the box!
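For reference, a minimal GRPO run with TRL looks roughly like this. It is a sketch adapted from the common TRL usage pattern, with a toy reward function; the dataset choice, model id, and argument names are assumptions that may vary across TRL versions:

```python
# Minimal GRPO sketch with TRL; toy reward, and argument names / vLLM
# setup may vary across TRL versions.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Any dataset with a "prompt" column works.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="qwen2-grpo",
    use_vllm=True,  # generation handled by vLLM, weights kept in sync
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```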
samchain posted an update 30 days ago
NLP for Economics 1.2 is out!

This collection features two models:
- EconoSentiment: a first version based on econo-sentence-v2, trained on the Financial PhraseBank and showing strong performance.
- EconoDetect-US: a classifier that detects texts related to the US economy.

And two datasets:
- economics-relevance: the HF version of the Kaggle dataset US Economics News
- imf-weo-reports: a first, gated dataset aggregating several World Economic Outlook reports from the IMF
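A quick usage sketch for the sentiment model with the transformers pipeline; the repo id and labels below are guesses, so check the collection for the actual names:

```python
# Hypothetical usage sketch; the repo id and labels are guesses, so check
# the collection for the actual names.
from transformers import pipeline

sentiment = pipeline("text-classification", model="samchain/EconoSentiment")
print(sentiment("The Fed's rate cut lifted equity markets across the board."))
# e.g. [{'label': 'positive', 'score': 0.97}]
```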
louisbrulenaudet posted an update about 1 month ago
I’ve just released logfire-callback on PyPI, designed to facilitate monitoring of Hugging Face Transformers training loops with Pydantic Logfire 🤗

The callback automatically logs the training start (with configuration parameters), periodic metrics, and training completion ⏱️

Install the package using pip:
```bash
pip install logfire-callback
```

First, ensure you have a Logfire API token and set it as an environment variable:
```bash
export LOGFIRE_TOKEN=your_logfire_token
```

Then use the callback in your training code:
```python
from transformers import Trainer, TrainingArguments
from logfire_callback import LogfireCallback

# Initialize your model, dataset, etc.

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    # ... other training arguments
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    callbacks=[LogfireCallback()]  # Add the Logfire callback here
)

trainer.train()
```
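For the curious, a minimal callback along these lines can be built directly on the Trainer callback API. This is a sketch of the general idea, not logfire-callback's actual implementation:

```python
# Sketch of the general idea, not logfire-callback's actual implementation.
import logfire
from transformers import TrainerCallback

logfire.configure()  # picks up LOGFIRE_TOKEN from the environment

class MiniLogfireCallback(TrainerCallback):
    def on_train_begin(self, args, state, control, **kwargs):
        # Log training start with a few configuration parameters.
        logfire.info("training started",
                     output_dir=args.output_dir,
                     epochs=args.num_train_epochs)

    def on_log(self, args, state, control, logs=None, **kwargs):
        # Called whenever the Trainer logs metrics (loss, lr, etc.).
        logfire.info("metrics", step=state.global_step, **(logs or {}))

    def on_train_end(self, args, state, control, **kwargs):
        logfire.info("training finished", step=state.global_step)
```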

If you have any feedback, please reach out to @louisbrulenaudet.
chansung posted an update about 1 month ago
Mistral Small 3.1 24B is not only free for commercial use but also the best model for single-GPU deployment.

I packed up all the information you need to know in a single picture. Hope this helps! :)
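For instance, a single-GPU deployment sketch with vLLM might look like this; the model id follows Mistral's Hub naming, so verify it and any tokenizer settings in the model card before relying on it:

```python
# Hedged sketch of a single-GPU deployment with vLLM; verify the model id
# and any tokenizer/config flags in the model card before use.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",  # assumed Hub id
    max_model_len=8192,  # cap the context so the KV cache fits on one GPU
)
outputs = llm.generate(
    ["Summarize why single-GPU deployment matters."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```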
mlabonne posted an update about 1 month ago
✂️ Gemma 3 Abliterated

I noticed that Gemma 3 was much more resilient to refusal removal than other models like Qwen 2.5.

I experimented with different recipes and improved the abliteration technique I wrote about last year.

It's still experimental but the refusal rate is super low in my tests. Enjoy!

mlabonne/gemma-3-4b-it-abliterated
mlabonne/gemma-3-12b-it-abliterated
mlabonne/gemma-3-27b-it-abliterated
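(For context, abliteration estimates a "refusal direction" in activation space and projects it out. Below is a conceptual sketch assuming precomputed residual-stream activations; it is not the exact recipe used for these models.)

```python
# Conceptual sketch of abliteration, assuming residual-stream activations
# have already been collected for harmful vs. harmless prompts at one layer.
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    """Difference-of-means 'refusal direction', normalized to unit length."""
    direction = harmful.mean(dim=0) - harmless.mean(dim=0)
    return direction / direction.norm()

def ablate(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove each hidden state's component along the refusal direction."""
    return hidden - (hidden @ direction).unsqueeze(-1) * direction

# Toy shapes: 64 prompts per set, d_model = 4096.
d = refusal_direction(torch.randn(64, 4096), torch.randn(64, 4096))
h = ablate(torch.randn(8, 4096), d)  # hidden states with refusal removed
```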

giux78 posted an update about 1 month ago
At mii-llm, together with @efederici, @mferraretto, @FinancialSupport and @DeepMount00, we just released #Propaganda, a framework designed to evaluate and train LLMs on political opinions and bias. We aim to analyze both open-source and closed-source LLMs to understand the political positions and biases expressed in their outputs. Moreover, we provide a set of recipes to enforce political positions in the models by creating ad hoc curated datasets and by applying fine-tuning techniques. By releasing our work in the open, we hope to foster contributions: https://github.com/mii-llm/propaganda

This framework offers opportunities for expansion in various directions and could become the standard reference for evaluating LLMs on political topics, particularly those that influence public opinion.
chansung posted an update about 1 month ago
Gemma 3 release in a nutshell.
(It seems function calling is not supported, although the announcement said it was.)
mcpotato posted an update about 2 months ago
Stoked to announce we've partnered with JFrog to continue improving safety on the Hub! 🐸

Their model scanner brings new scanning capabilities to the table, aimed at reducing alert fatigue.

More on that in our blog post: https://huggingface.co/blog/jfrog
christopher in blog-explorers/README about 2 months ago

[Support] Community Articles
#5 opened about 1 year ago by victor