
clefourrier posted an update about 13 hours ago
Saying Claude 4 is "the best coding model in the world" based on its SWE-bench scores is super misleading, and here is why:

If you look at the announcement table, their model has the best scores, but... if you look at the very bottom, in tiny size-4 font, you'll see that the metric they report is actually not the same metric as the one used for the other models!


Comparing "pass@1 averaged 10 times" to "normal pass@1" is like grading one student by letting them take the test 10 times and averaging their question scores, while the other students only get a single attempt.

The first way to grade (avg@10) is actually quite good statistically, much better than what model creators usually report, because models tend to be quite inconsistent - sometimes good, sometimes bad...
But! Then you want to do it for all models, and report the scores with error bars.
The issue is that, if you do... well, it's going to be harder to say your model is the best, because the error bars will overlap between models, by a lot.
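
To make it concrete, here's a minimal sketch of the two metrics on made-up pass/fail results (the ~40% pass rate and the 500-problem count are assumptions for illustration, not SWE-bench numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up eval results: rows = problems, columns = 10 independent
# generations per problem, True = the generated patch passes the tests.
n_problems, k = 500, 10
results = rng.random((n_problems, k)) < 0.4

# "Normal" pass@1: score a single generation per problem.
pass_at_1 = results[:, 0].mean()

# avg@10: per-problem pass rate over 10 generations, then averaged
# over problems - the footnote metric.
avg_at_10 = results.mean(axis=1).mean()

# Error bars: standard error over problems. If two models' intervals
# overlap, you can't call either one "the best".
se_1 = results[:, 0].std(ddof=1) / np.sqrt(n_problems)
se_10 = results.mean(axis=1).std(ddof=1) / np.sqrt(n_problems)

print(f"pass@1 = {pass_at_1:.3f} ± {1.96 * se_1:.3f}")
print(f"avg@10 = {avg_at_10:.3f} ± {1.96 * se_10:.3f}")
```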

Also, you'll see that 2 numbers are reported: the first one uses avg@10 (what I explained above), and the second, higher one uses this plus many other tricks (roughly sketched in the snippet after this list):
- test-time compute (so having the model generate a tree of answers and selecting the best as you go, more or less)
- removing the runs where the model breaks the tests
- and using another model to select the most promising solution!
You can't really say such a setup is better than the rest, mostly because it's **way less efficient**: it burns far more compute to reach a similar result.
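
For illustration, here is a hypothetical sketch of that second pipeline; every function in it (generate, breaks_tests, score_with_ranker) is a placeholder I made up, not anyone's actual code:

```python
import random

random.seed(0)

# All three helpers below are made-up stand-ins for the real pieces.
def generate(problem: str) -> str:               # the model samples a patch
    return f"patch-{random.randrange(1000)}"

def breaks_tests(candidate: str) -> bool:        # run the repo's test harness
    return random.random() < 0.2

def score_with_ranker(candidate: str) -> float:  # a second model scores it
    return random.random()

def best_of_n(problem: str, n: int = 16) -> str | None:
    candidates = [generate(problem) for _ in range(n)]          # n× the compute of pass@1
    survivors = [c for c in candidates if not breaks_tests(c)]  # drop test-breaking runs
    return max(survivors, key=score_with_ranker, default=None)  # ranker picks the winner

print(best_of_n("fix the failing CI job"))
```

Each generate call is a full extra model run, which is where the efficiency cost hides.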

It's honestly a bit sad because, from user reports, the model sounds good - however, this announcement is overblown numbers-wise, and I'm quite sure it's more a problem of "too much marketing" than of "bad science".

Another thing which makes the comparison invalid is the complete absence of open-source models from the report - do they really not know about DeepSeek, Qwen, the new Mistral for code, and all the cool specialised models found on the Hub?
clefourrier posted an update 5 days ago
Always surprised that so few people actually read the FineTasks blog, on
✨how to select training evals with the highest signal✨

If you're serious about training models without wasting compute on shitty runs, you absolutely should read it!!

A high-signal eval actually tells you precisely, during training, how well & what your model is learning, allowing you to discard the bad runs/bad samplings/...!

The blog covers in depth prompt choice, metrics, and datasets, across languages/capabilities, and my fave section is "which properties should evals have" 👌
(to know on your use case how to select the best evals for you)
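
As a rough illustration of what "high signal" means in practice (the checkpoint scores below are invented, and the two properties shown are only a subset of what the blog discusses):

```python
import numpy as np
from scipy.stats import spearmanr

# Invented checkpoint scores for one eval across a training run:
# rows = training steps, columns = re-runs with different samplings.
steps = np.array([1000, 2000, 4000, 8000, 16000])
scores = np.array([
    [0.31, 0.33, 0.30],
    [0.35, 0.36, 0.34],
    [0.41, 0.40, 0.42],
    [0.47, 0.48, 0.46],
    [0.52, 0.53, 0.51],
])

# 1) Monotonicity: the score should improve as training progresses.
monotonicity, _ = spearmanr(steps, scores.mean(axis=1))

# 2) Low noise: run-to-run variability should be small compared to
#    the improvement you're trying to detect.
noise = scores.std(axis=1, ddof=1).mean()
signal = scores.mean(axis=1)[-1] - scores.mean(axis=1)[0]

print(f"monotonicity (Spearman): {monotonicity:.2f}")
print(f"signal-to-noise ratio:   {signal / noise:.1f}")
```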

Blog: HuggingFaceFW/blogpost-fine-tasks
clefourrier posted an update 2 months ago
The Gemma 3 family is out! I'm reading the tech report, and this section was really interesting to me from a methods/scientific fairness pov.

Instead of doing over-hyped comparisons, they clearly state that **results are reported in a setup which is advantageous to their models**.
(Which everybody does, but people usually don't say)

For a tech report, it makes a lot of sense to report model performance when used optimally!
On leaderboards, on the other hand, comparisons will be apples to apples, but in a potentially suboptimal way for a given model family (just like some users interact sub-optimally with models)

It also contains a cool section (6) on training-data memorization rates! It's important to check whether your model will output the training data it has seen verbatim: always an issue for privacy/copyright/... but also very much for evaluation!

Because if your model knows its evals by heart, you're not testing for generalization.
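
For reference, the usual shape of such a memorization test looks roughly like this (a sketch, not necessarily Gemma's exact protocol; model_continue is a placeholder for whatever generation call you use):

```python
def model_continue(prefix_tokens: list[int], n: int) -> list[int]:
    return []  # placeholder: greedy-decode n tokens after the prefix

def memorization_rate(training_docs: list[list[int]],
                      prefix_len: int = 50, suffix_len: int = 50) -> float:
    hits = 0
    for tokens in training_docs:
        prefix = tokens[:prefix_len]
        true_suffix = tokens[prefix_len:prefix_len + suffix_len]
        # A "hit" = the model regurgitates the seen suffix exactly.
        if model_continue(prefix, suffix_len) == true_suffix:
            hits += 1
    return hits / len(training_docs)
```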
clefourrier posted an update about 1 year ago
In a basic chatbot, errors are annoyances. In medical LLMs, errors can have life-threatening consequences 🩸

It's therefore vital to benchmark/follow advances in medical LLMs before even thinking about deployment.

This is why a small research team introduced a medical LLM leaderboard, to get reproducible and comparable results between LLMs, and allow everyone to follow advances in the field.

openlifescienceai/open_medical_llm_leaderboard

Congrats to @aaditya and @pminervini !
Learn more in the blog: https://huggingface.co/blog/leaderboard-medicalllm
clefourrier posted an update about 1 year ago
Contamination free code evaluations with LiveCodeBench! 🖥️

LiveCodeBench is a new leaderboard, which contains:
- complete code evaluations (on code generation, self repair, code execution, tests)
- my favorite feature: problem selection by publication date 📅

This feature means you can get model scores averaged only on new problems, published after the model's training data was collected. This means... contamination-free code evals! 🚀
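
A minimal sketch of the idea (the problems, dates, and cutoff below are all made up for illustration):

```python
from datetime import date

problems = [
    {"id": "two-sum-ii",  "published": date(2023, 6, 1),  "passed": True},
    {"id": "graph-paths", "published": date(2024, 2, 10), "passed": False},
    {"id": "interval-dp", "published": date(2024, 4, 22), "passed": True},
]

training_cutoff = date(2023, 12, 31)  # assumed cutoff of the evaluated model

# Only average over problems the model cannot have seen during training.
fresh = [p for p in problems if p["published"] > training_cutoff]
score = sum(p["passed"] for p in fresh) / len(fresh)
print(f"pass rate on {len(fresh)} post-cutoff problems: {score:.2f}")
```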

Check it out!

Blog: https://huggingface.co/blog/leaderboard-livecodebench
Leaderboard: livecodebench/leaderboard

Congrats to @StringChaos @minimario @xu3kev @kingh0730 and @FanjiaYan for the super cool leaderboard!
clefourrier posted an update about 1 year ago
🆕 Evaluate your RL agents - who's best at Atari?🏆

The new RL leaderboard evaluates agents in 87 possible environments (from Atari 🎮 to motion control simulations🚶and more)!

When you submit your model, it's run and evaluated in real time - and the leaderboard displays small videos of the best model's run, which is super fun to watch! ✨
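
Under the hood, scoring an agent boils down to mean episodic return over several rollouts; here's a minimal sketch with gymnasium (CartPole and a random policy stand in for a submitted model; the leaderboard's actual harness and Atari setup will differ):

```python
import gymnasium as gym
import numpy as np

env = gym.make("CartPole-v1")
returns = []
for episode in range(10):
    obs, info = env.reset(seed=episode)
    done, total = False, 0.0
    while not done:
        action = env.action_space.sample()  # your trained policy goes here
        obs, reward, terminated, truncated, info = env.step(action)
        total += float(reward)
        done = terminated or truncated
    returns.append(total)

print(f"mean episodic return: {np.mean(returns):.1f} ± {np.std(returns):.1f}")
```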

Kudos to @qgallouedec for creating and maintaining the leaderboard!
Let's find out which agent is the best at games! 🚀

open-rl-leaderboard/leaderboard