Unpopular opinion: doing Open Source takes courage!
Not everyone is brave enough to release what they've built (the way they've built it) into the wild to be judged! It really requires a high level of "knowing wth you are doing"! It's kind of a superpower!
Well, this is a bit late, but consider giving our recent blog a read if you are interested in Evaluation.
You don't have to be into Arabic NLP to read it; the main contribution we introduce is a new evaluation measure for NLG. We applied this measure to Arabic first, and we will be working with colleagues from the community to extend it to other languages.
🌐 Announcing Global-MMLU: an improved, open MMLU dataset with evaluation coverage across 42 languages, built with Argilla and the Hugging Face community.
Global-MMLU is the result of months of work with the goal of advancing Multilingual LLM evaluation. It's been an amazing open science effort with collaborators from Cohere For AI, Mila - Quebec Artificial Intelligence Institute, EPFL, Massachusetts Institute of Technology, AI Singapore, National University of Singapore, KAIST, Instituto Superior Técnico, Carnegie Mellon University, CONICET, and University of Buenos Aires.
🏷️ 200+ contributors used Argilla to flag MMLU questions where regional, dialect, or cultural knowledge was required to answer correctly. 85% of the questions required Western-centric knowledge!
Thanks to this annotation process, the open dataset contains two subsets (a short loading sketch follows the list):
1. 🗽 Culturally Agnostic: no specific regional or cultural knowledge is required.
2. ⚖️ Culturally Sensitive: dialect, cultural, or geographic knowledge is required to answer correctly.
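If you want to pull the two subsets apart programmatically, a minimal sketch with the `datasets` library might look like this. The config name, split, and the `cultural_sensitivity_label` column with its `CA`/`CS` values are assumptions based on this announcement, so check the dataset card first:

```python
from datasets import load_dataset

# Load one language config of Global-MMLU from the Hub.
# Config name and split are assumptions; see the dataset card.
ds = load_dataset("CohereForAI/Global-MMLU", "en", split="test")

# Split into the two announced subsets by the cultural-sensitivity label.
# The column name and the "CA"/"CS" values are assumptions as well.
agnostic = ds.filter(lambda row: row["cultural_sensitivity_label"] == "CA")
sensitive = ds.filter(lambda row: row["cultural_sensitivity_label"] == "CS")

print(f"Culturally Agnostic: {len(agnostic)} | Culturally Sensitive: {len(sensitive)}")
```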
Moreover, we provide high-quality translations for 25 of the 42 languages, thanks again to the community and professional annotators leveraging Argilla on the Hub.
I hope this will lead to a better understanding of the limitations and challenges of making open AI useful for many languages.
Build datasets for AI on the Hugging Face Hub—10x easier than ever!
Today, I'm excited to share our biggest feature since we joined Hugging Face.
Here’s how it works:
1. Pick a dataset: upload your own or choose from 240K open datasets.
2. Paste the Hub dataset ID into Argilla and set up your labeling interface.
3. Share the URL with your team or the whole community!
And the best part? It’s:
- No code – no Python needed
- Integrated – all within the Hub
- Scalable – from solo labeling to 100s of contributors
I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.
Let's make this sentence obsolete: "Everyone wants to do the model work, not the data work."
I feel like this incredible resource hasn't gotten the attention it deserves in the community!
@clefourrier and, more broadly, the Hugging Face evaluation team put together a fantastic guidebook covering a lot about 𝗘𝗩𝗔𝗟𝗨𝗔𝗧𝗜𝗢𝗡, from the basics to advanced tips.
Big news! You can now build strong ML models without days of human labelling
You simply:
- Define your dataset, including annotation guidelines, labels, and fields.
- Optionally label some records manually.
- Use an LLM to auto-label your data with a human (you? your team?) in the loop! (A rough sketch of this loop follows.)
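Here is what that loop can look like in plain Python. Everything below is a hypothetical placeholder, not Argilla's actual API: the `llm_label` helper, the labels, the records, and the confidence threshold are all made up for illustration:

```python
# Human-in-the-loop auto-labeling sketch. Swap `llm_label` for a real call
# to whatever LLM provider you use.

LABELS = ["positive", "negative", "neutral"]
CONFIDENCE_THRESHOLD = 0.9  # arbitrary cutoff; tune it for your task

def llm_label(text: str) -> tuple[str, float]:
    """Ask an LLM to pick one of LABELS and return (label, confidence).
    Stubbed here so the sketch runs end to end."""
    return "neutral", 0.5

records = [{"text": "Great release!"}, {"text": "The docs are confusing."}]

auto_labeled, needs_review = [], []
for record in records:
    label, confidence = llm_label(record["text"])
    record.update(label=label, confidence=confidence)
    # Confident predictions are accepted; the rest go to a human for review.
    bucket = auto_labeled if confidence >= CONFIDENCE_THRESHOLD else needs_review
    bucket.append(record)

print(f"auto-labeled: {len(auto_labeled)}, sent to humans: {len(needs_review)}")
```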
Don't you think we should add an "Evaluation" tag for datasets that are meant to be benchmarks, not training data?
At the very least, anyone collecting a group of datasets from an organization, or even from the whole Hub, could filter on that tag and avoid contaminating their "training" data.
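For illustration, here is how that filter could look with `huggingface_hub` if such a tag existed. The "evaluation" tag value and the "some-org" author are made up for this sketch; `list_datasets` and its `filter`/`author` parameters are real:

```python
from huggingface_hub import list_datasets

# Hypothetical: exclude anything carrying an "evaluation" tag from a
# training-data crawl. The tag itself does not exist today.
benchmark_ids = {d.id for d in list_datasets(filter="evaluation")}

training_candidates = [
    d.id
    for d in list_datasets(author="some-org")  # placeholder organization
    if d.id not in benchmark_ids
]
print(f"{len(training_candidates)} datasets left after dropping benchmarks")
```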