Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'config' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'config' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'config' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1420, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1052, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Columns and types:

uid                 string
id                  string
organization        string
name                string
created_at          timestamp[us]
last_modified       string
trending_score      int64
likes               int64
tags                sequence
config              dict
results             dict
runtime_stage       string
card_data           dict
sources             sequence
enriched            dict
approval            bool
consolidated_notes  string
65becbc4744da3e639da88d9
HaizeLabs/red-teaming-resistance-benchmark
HaizeLabs
red-teaming-resistance-benchmark
2024-02-03T23:27:00
2024-06-07 18:34:09
1
41
[ "test:public", "modality:text", "judge:auto", "submission:automatic", "eval:safety" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "pink", "colorTo": "red", "duplicated_from": null, "emoji": "πŸ’»", "license": null, "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "Redteaming Resistance Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 367, "daysSinceModification": 243, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
A benchmark that evaluates LLMs' resistance to adversarial prompts and safety violations across multiple categories of harmful content.
647848ca9c1f42c1f4d7e033
gaia-benchmark/leaderboard
gaia-benchmark
leaderboard
2023-06-01T07:29:14
2025-01-30 07:53:25
15
246
[ "modality:image", "modality:text", "modality:agent", "judge:auto", "submission:automatic", "test:private", "modality:video" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "indigo", "duplicated_from": null, "emoji": "🦾", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "GAIA Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 615, "daysSinceModification": 6, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
A leaderboard for tool-augmented LLMs!
6687e8366d98cab219a29b72
ttsds/benchmark
ttsds
benchmark
2024-07-05T12:33:58
2024-08-31 19:50:13
1
20
[ "test:public", "modality:audio", "eval:generation", "judge:auto", "submission:semiautomatic" ]
{}
{ "results": { "last_modified": "2024-11-19T16:49:02.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "mit", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": "Text-To-Speech (TTS) Evaluation using objective metrics.", "title": "TTSDS Benchmark and Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": null, "modality": [ "audio" ] }, "categoryCounts": { "eval": 1, "language": null, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "semiautomatic" ], "test": [ "public" ] }, "daysSinceCreation": 215, "daysSinceModification": 158, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": true, "isRunning": true }, "score": 4 } }
true
Compares the quality of speech generation by text-to-speech models using automated metrics.
666f8193d148ca0bcfbca2ed
Intel/UnlearnDiffAtk-Benchmark
Intel
UnlearnDiffAtk-Benchmark
2024-06-17T00:21:39
2025-02-04 15:58:57
1
7
[ "modality:image", "eval:generation", "judge:auto", "submission:manual", "eval:safety" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "UnlearnDiffAtk Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 233, "daysSinceModification": 1, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
A benchmark that evaluates how well diffusion models can unlearn specific concepts while maintaining generation quality and prompt alignment.
66b4896bcc8441dc730567e5
panuthept/thai_sentence_embedding_benchmark
panuthept
thai_sentence_embedding_benchmark
2024-08-08T09:01:31
2024-08-08 16:22:44
1
12
[ "test:public", "modality:text", "judge:auto", "modality:artefacts", "language:thai", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "Thai Sentence Embedding Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 181, "daysSinceModification": 181, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
A benchmark that evaluates Thai sentence embedding models across multiple tasks including semantic similarity, classification, and retrieval.
656449ea771319d93b10fe07
protectai/prompt-injection-benchmark
protectai
prompt-injection-benchmark
2023-11-27T07:48:58
2024-11-20 17:26:06
1
13
[ "modality:text", "judge:auto", "test:private", "eval:safety" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "yellow", "colorTo": "gray", "duplicated_from": null, "emoji": "πŸ“", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.6.0", "short_description": null, "title": "Prompt Injection Detection Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 436, "daysSinceModification": 77, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
A benchmark that evaluates different prompt injection detection systems by measuring their ability to identify and prevent malicious prompts.
6730c9858b5a645918504e5b
StarscreamDeceptions/Multilingual-MMLU-Benchmark-Leaderboard
StarscreamDeceptions
Multilingual-MMLU-Benchmark-Leaderboard
2024-11-10T14:56:05
2024-11-25 07:51:11
1
10
[ "test:public", "language:chinese", "judge:auto", "submission:automatic", "language:yoruba", "language:italian", "language:spanish", "eval:generation", "language:english", "language:indonesian", "language:swahili", "modality:text", "language:arabic", "language:hindi", "language:french", "language:portugese", "language:german", "language:japanese", "language:bengali" ]
{}
{ "results": { "last_modified": "2024-11-13T16:41:46.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "pink", "colorTo": "purple", "duplicated_from": null, "emoji": "πŸ†", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "🌐 Multilingual MMLU Benchmark Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 87, "daysSinceModification": 72, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
This leaderboard is dedicated to evaluating and comparing the multilingual capabilities of LLMs.
66e7fad2f0053c645b8107df
Inferless/LLM-Inference-Benchmark
Inferless
LLM-Inference-Benchmark
2024-09-16T09:30:58
2024-10-03 08:38:24
0
8
[ "modality:text", "eval:performance", "judge:auto" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "LLM Inference Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 142, "daysSinceModification": 125, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Compares the inference speed and performance of LLMs using different libraries.
65a5a7c26145ebc6e7e39243
TTS-AGI/TTS-Arena
TTS-AGI
TTS-Arena
2024-01-15T21:46:42
2025-01-30 16:45:06
12
621
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "blue", "duplicated_from": null, "emoji": "πŸ†", "license": "zlib", "pinned": true, "sdk": "gradio", "sdk_version": "5.1.0", "short_description": "Vote on the latest TTS models!", "title": "TTS Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 386, "daysSinceModification": 6, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
65e58abff4a700ec2ddb2533
Pendrokar/TTS-Spaces-Arena
Pendrokar
TTS-Spaces-Arena
2024-03-04T08:47:59
2025-02-02 19:08:13
11
282
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "red", "duplicated_from": null, "emoji": "πŸ€—πŸ†", "license": "zlib", "pinned": true, "sdk": "gradio", "sdk_version": "5.13.0", "short_description": "Blind vote on HF TTS models!", "title": "TTS Spaces Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 338, "daysSinceModification": 3, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
6691e2b0804994abfc2d81c4
mteb/arena
mteb
arena
2024-07-13T02:13:04
2025-01-27 02:22:41
2
88
[ "modality:artefacts", "judge:humans" ]
{}
{ "results": { "last_modified": "2025-02-05T08:37:09.000Z" } }
RUNNING
{ "app_file": null, "colorFrom": "indigo", "colorTo": "blue", "duplicated_from": null, "emoji": "βš”οΈ", "license": null, "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "MTEB Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 207, "daysSinceModification": 9, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
Massive Text Embedding Benchmark (MTEB) Leaderboard
674eea98c6a6ef2849b4a0ac
bgsys/background-removal-arena
bgsys
background-removal-arena
2024-12-03T11:25:12
2025-01-31 09:20:11
9
58
[ "judge:humans", "modality:image" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "pink", "colorTo": "yellow", "duplicated_from": null, "emoji": "⚑", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "5.7.1", "short_description": null, "title": "Background Removal Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 64, "daysSinceModification": 5, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Background removal leaderboard.
6710c75984a75320ee25e5aa
k-mktr/gpu-poor-llm-arena
k-mktr
gpu-poor-llm-arena
2024-10-17T08:14:17
2025-01-29 20:31:14
3
175
[ "eval:performance", "test:public", "modality:text", "judge:humans" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "purple", "duplicated_from": null, "emoji": "πŸ†", "license": "mit", "pinned": true, "sdk": "gradio", "sdk_version": "5.9.1", "short_description": "Compact LLM Battle Arena: Frugal AI Face-Off!", "title": "GPU Poor LLM Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 111, "daysSinceModification": 6, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates small LLMs through human preference battles.
651f831f128d26b399db9ea5
dylanebert/3d-arena
dylanebert
3d-arena
2023-10-06T03:46:39
2025-01-24 19:44:25
4
247
[ "test:public", "judge:humans", "modality:3d" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "gray", "colorTo": "indigo", "duplicated_from": null, "emoji": "🏒", "license": "mit", "pinned": false, "sdk": "docker", "sdk_version": null, "short_description": null, "title": "3D Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 488, "daysSinceModification": 12, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
The 3D Arena leaderboard evaluates generative 3D models.
669d17bbe99ea743cfde99b3
SUSTech/ChineseSafe-Benchmark
SUSTech
ChineseSafe-Benchmark
2024-07-21T14:14:19
2024-12-28 06:53:49
0
11
[ "language:chinese", "modality:text", "judge:auto", "test:private", "submission:manual", "eval:safety" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "purple", "colorTo": "purple", "duplicated_from": null, "emoji": "🌍", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.38.1", "short_description": null, "title": "ChineseSafe" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 199, "daysSinceModification": 39, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
A benchmark that evaluates LLMs' ability to moderate Chinese content by measuring their performance in identifying safe and unsafe text across multiple categories.
663288d87700d0f6454230ac
andrewrreed/closed-vs-open-arena-elo
andrewrreed
closed-vs-open-arena-elo
2024-05-01T18:24:24
2025-01-09 00:30:14
1
146
[ "eval:performance", "test:public", "modality:text", "judge:humans" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "blue", "duplicated_from": null, "emoji": "πŸ”¬", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.28.3", "short_description": null, "title": "Open LLM Progress Tracker" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 280, "daysSinceModification": 27, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Visualizes LLM progress through LMSYS Arena ELO ratings over time.
672b47b91b5f7a5e97a0e631
Marqo/Ecommerce-Embedding-Benchmarks
Marqo
Ecommerce-Embedding-Benchmarks
2024-11-06T10:40:57
2024-11-11 15:57:46
0
17
[ "eval:performance", "test:public", "modality:image", "modality:text", "eval:generation" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "yellow", "duplicated_from": null, "emoji": "πŸ†", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "5.5.0", "short_description": null, "title": "Ecommerce Embedding Benchmarks" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 91, "daysSinceModification": 86, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Compares ecommerce embedding models on multimodal product retrieval tasks.
65c9a8e7dc38a2858a77ff8d
TIGER-Lab/GenAI-Arena
TIGER-Lab
GenAI-Arena
2024-02-12T05:13:11
2025-02-04 17:00:34
2
267
[ "modality:image", "eval:generation", "modality:video", "judge:humans" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "purple", "colorTo": "pink", "duplicated_from": null, "emoji": "πŸ“ˆ", "license": "mit", "pinned": true, "sdk": "gradio", "sdk_version": "4.41.0", "short_description": "Realtime Image/Video Gen AI Arena", "title": "GenAI Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 359, "daysSinceModification": 1, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates visual AI models through human preference votes in arena battles.
65a2cd9890b5e87bcdf9f2e2
yutohub/japanese-chatbot-arena-leaderboard
yutohub
japanese-chatbot-arena-leaderboard
2024-01-13T17:51:20
2024-03-08 11:16:07
0
34
[ "eval:generation", "modality:text", "language:japanese" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "pink", "duplicated_from": null, "emoji": "πŸŒ–", "license": null, "pinned": false, "sdk": "streamlit", "sdk_version": "1.30.0", "short_description": null, "title": "Japanese Chatbot Arena Leaderboard" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 389, "daysSinceModification": 334, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Japanese Large Language Models through crowdsourced pairwise comparison in a chat arena format.
6670f4cffc615a6257ab35dd
ksort/K-Sort-Arena
ksort
K-Sort-Arena
2024-06-18T02:45:35
2025-01-07 03:00:09
0
45
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "purple", "colorTo": "pink", "duplicated_from": null, "emoji": "πŸ“ˆ", "license": "mit", "pinned": false, "sdk": "gradio", "sdk_version": "4.21.0", "short_description": "Efficient Image/Video K-Sort Arena", "title": "K-Sort Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 232, "daysSinceModification": 29, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
664885ecb5e5f95dc65dc3d9
Auto-Arena/Leaderboard
Auto-Arena
Leaderboard
2024-05-18T10:41:48
2024-10-07 02:37:00
0
21
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "yellow", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "4.27.0", "short_description": null, "title": "Auto-Arena Leaderboard" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 263, "daysSinceModification": 121, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
67909d72a1832c8a7cdd4599
galileo-ai/agent-leaderboard
galileo-ai
agent-leaderboard
2025-01-22T07:25:38
2025-02-05 13:21:55
28
31
[ "modality:tools", "eval:generation", "judge:function" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "purple", "duplicated_from": null, "emoji": "πŸ’¬", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "5.0.1", "short_description": "Ranking of LLMs for agentic tasks", "title": "Agent Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 14, "daysSinceModification": 0, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluating LLM capabilities in tool usage and functions.
665e7241f8cb81b0a476eccb
ArtificialAnalysis/Text-to-Image-Leaderboard
ArtificialAnalysis
Text-to-Image-Leaderboard
2024-06-04T01:47:45
2024-06-16 20:06:00
9
336
[]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "green", "colorTo": "green", "duplicated_from": null, "emoji": "πŸ“Š", "license": null, "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "Text To Image Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 246, "daysSinceModification": 234, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
643d3016d2c1e08a5eca0c22
open-llm-leaderboard/open_llm_leaderboard
open-llm-leaderboard
open_llm_leaderboard
2023-04-17T11:40:06
2025-01-10 19:24:50
58
12386
[ "eval:performance", "test:public", "modality:text", "eval:generation", "judge:auto", "submission:automatic", "eval:math", "eval:code" ]
{}
{ "results": { "last_modified": "2025-02-05T13:05:51.000Z" } }
RUNNING
{ "app_file": null, "colorFrom": "blue", "colorTo": "red", "duplicated_from": "open-llm-leaderboard/open_llm_leaderboard", "emoji": "πŸ†", "license": "apache-2.0", "pinned": true, "sdk": "docker", "sdk_version": null, "short_description": "Track, rank and evaluate open LLMs and chatbots", "title": "Open LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 660, "daysSinceModification": 26, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
Comparing Large Language Models in a reproducible way.
65af98551501453abf5d8e8d
opencompass/open_vlm_leaderboard
opencompass
open_vlm_leaderboard
2024-01-23T10:43:33
2025-01-20 06:49:40
11
593
[ "modality:image", "eval:generation", "judge:auto" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "green", "duplicated_from": null, "emoji": "🌎", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.44.1", "short_description": "VLMEvalKit Evaluation Results Collection", "title": "Open VLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 379, "daysSinceModification": 16, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
65f0f612555caedb299e54d9
DontPlanToEnd/UGI-Leaderboard
DontPlanToEnd
UGI-Leaderboard
2024-03-13T00:40:50
2025-02-04 22:37:06
21
623
[ "modality:text", "eval:generation", "test:private", "submission:manual", "eval:safety", "language:English" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "gray", "colorTo": "purple", "duplicated_from": null, "emoji": "📒", "license": "apache-2.0", "pinned": false, "sdk": "docker", "sdk_version": null, "short_description": null, "title": "UGI Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation", "safety" ], "language": [ "english" ], "modality": [ "text" ] }, "categoryCounts": { "eval": 2, "language": 1, "modality": 1 }, "categoryValues": { "judge": null, "submission": [ "manual" ], "test": [ "private" ] }, "daysSinceCreation": 329, "daysSinceModification": 0, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
662e9e1efa3959cbe30a35a6
ArtificialAnalysis/LLM-Performance-Leaderboard
ArtificialAnalysis
LLM-Performance-Leaderboard
2024-04-28T19:06:06
2024-06-11 20:46:38
4
274
[ "eval:performance", "modality:text", "judge:auto", "test:private", "submission:manual" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "purple", "colorTo": "purple", "duplicated_from": null, "emoji": "🐨", "license": null, "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "LLM Performance Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 283, "daysSinceModification": 238, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
A benchmark that evaluates LLM API providers by measuring their performance metrics including latency, speed, and quality across different workload scenarios.
633581939ac57cf2967be686
mteb/leaderboard
mteb
leaderboard
2022-09-29T11:29:23
2025-02-05 09:48:47
63
4,682
[ "modality:artefacts", "submission:semiautomatic" ]
{}
{ "results": { "last_modified": "2025-02-05T08:37:09.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "mit", "pinned": true, "sdk": "docker", "sdk_version": null, "short_description": null, "title": "MTEB Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 860, "daysSinceModification": 0, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
An arena ranking LLMs on retrieval capabilities.
660bb9ccb75880c7c71ca46c
ZhangYuhan/3DGen-Arena
ZhangYuhan
3DGen-Arena
2024-04-02T07:54:52
2024-12-10 08:56:26
0
93
[ "test:public", "judge:humans", "modality:3d" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "indigo", "colorTo": "indigo", "duplicated_from": null, "emoji": "🐠", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.24.0", "short_description": null, "title": "3DGen Arena" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 309, "daysSinceModification": 57, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
3D Arena leaderboard evaluates generative 3D models.
64943e5108f840ed960f312a
optimum/llm-perf-leaderboard
optimum
llm-perf-leaderboard
2023-06-22T12:28:01
2025-02-03 11:16:07
6
416
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🏆🏋️", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.14.0", "short_description": null, "title": "LLM-Perf Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 594, "daysSinceModification": 2, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
66fead0f3a221be1070a1ed5
open-llm-leaderboard/comparator
open-llm-leaderboard
comparator
2024-10-03T14:41:19
2025-01-09 15:13:23
3
84
[]
{}
{ "results": { "last_modified": "2025-02-05T13:05:51.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "gray", "colorTo": "green", "duplicated_from": null, "emoji": "🏆", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.44.1", "short_description": "Compare Open LLM Leaderboard results", "title": "Open LLM Leaderboard Model Comparator" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 125, "daysSinceModification": 27, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
6468923b99182de17844bf7b
lmarena-ai/chatbot-arena-leaderboard
lmarena-ai
chatbot-arena-leaderboard
2023-05-20T09:26:19
2025-02-03 17:30:49
34
3,946
[ "modality:image", "modality:text", "eval:generation", "judge:humans" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "indigo", "colorTo": "green", "duplicated_from": null, "emoji": "🏆🤖", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "4.44.1", "short_description": null, "title": "Chatbot Arena Leaderboard" }
[ "arena" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 627, "daysSinceModification": 2, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Chatbot Arena is an open-source platform for evaluating AI through human preference.
66918a9ed4d26f854abab9c5
ParsBench/leaderboard
ParsBench
leaderboard
2024-07-12T19:57:18
2024-11-06 20:27:24
3
37
[ "modality:text", "eval:generation", "language:persian", "submission:automatic" ]
{}
{ "results": { "last_modified": "2024-08-17T18:52:12.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.5.0", "short_description": null, "title": "Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": [ "persian" ], "modality": [ "text" ] }, "categoryCounts": { "eval": 1, "language": 1, "modality": 1 }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 208, "daysSinceModification": 90, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": true, "isRunning": true }, "score": 4 } }
true
Compares Persian language models on diverse NLP tasks like reasoning, generation and understanding.
64f9e6dd59eae6df399ba1e9
hf-audio/open_asr_leaderboard
hf-audio
open_asr_leaderboard
2023-09-07T15:06:05
2024-11-22 23:31:43
5
608
[ "eval:performance", "test:public", "modality:audio", "judge:auto", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "blue", "duplicated_from": null, "emoji": "🏆", "license": null, "pinned": true, "sdk": "gradio", "sdk_version": "5.6.0", "short_description": null, "title": "Open ASR Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 517, "daysSinceModification": 74, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates English ASR model performance and speed on public benchmarks.
65e124cf138bd34f8ebc927d
gorilla-llm/berkeley-function-calling-leaderboard
gorilla-llm
berkeley-function-calling-leaderboard
2024-03-01T00:43:59
2024-08-23 06:16:27
2
83
[ "modality:text", "modality:tools", "modality:agent", "judge:auto", "eval:code" ]
{}
null
RUNNING
{ "app_file": "index.html", "colorFrom": "red", "colorTo": "purple", "duplicated_from": null, "emoji": "🏃", "license": "apache-2.0", "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "Berkeley Function Calling Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "code" ], "language": null, "modality": [ "tools", "text" ] }, "categoryCounts": { "eval": 1, "language": null, "modality": 2 }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 341, "daysSinceModification": 166, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
Evaluates LLMs ability to call functions.
6613a26850350afe76d25129
la-leaderboard/la-leaderboard
la-leaderboard
la-leaderboard
2024-04-08T07:53:12
2024-12-16 10:53:19
2
67
[ "test:public", "language:galician", "modality:text", "eval:generation", "judge:auto", "submission:automatic", "language:basque", "language:catalan", "language:spanish" ]
{}
{ "results": { "last_modified": "2024-10-18T15:39:43.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "yellow", "duplicated_from": null, "emoji": "🌸", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.44.1", "short_description": "Evaluate open LLMs in the languages of LATAM and Spain.", "title": "La Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": [ "spanish", "catalan", "basque", "galician" ], "modality": [ "text" ] }, "categoryCounts": { "eval": 1, "language": 4, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "automatic" ], "test": [ "public" ] }, "daysSinceCreation": 303, "daysSinceModification": 51, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": true, "isRunning": true }, "score": 4 } }
true
Evaluates LLM capabilities in Spanish varieties and official languages of Spain through comprehensive automated linguistic benchmarking across multiple regional languages.
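Across the rows above, the `quality.score` in the computed metadata tracks the `quality.flags` booleans. The following is a minimal sketch under the assumption (inferred from the sample rows, not documented anywhere in this preview) that the score is simply the count of true flags:

```python
# Hypothetical reconstruction of the quality score: the rule below is an
# assumption inferred from the preview rows, not a documented formula.

def quality_score(flags: dict) -> int:
    """Count the true quality flags for a leaderboard Space."""
    return sum(bool(v) for v in flags.values())

# Flags copied from the "open-llm-leaderboard" row shown earlier.
open_llm_flags = {
    "hasLeaderboardOrArenaTag": True,
    "hasRecentResults": False,
    "hasResults": True,
    "hasTags": False,
    "isRunning": True,
}
print(quality_score(open_llm_flags))  # 3, matching that row's "score": 3
```

The same rule reproduces the other samples: a Space with only `isRunning` true gets score 1, and the fully tagged La Leaderboard row (four true flags plus `isRunning` false on none) gets score 4.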
672b762fd40b55aa6f62e8f2
elmresearchcenter/open_universal_arabic_asr_leaderboard
elmresearchcenter
open_universal_arabic_asr_leaderboard
2024-11-06T13:59:11
2025-02-04 07:54:52
3
18
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": "A benchmark for open-source multi-dialect Arabic ASR models", "title": "Open Universal Arabic Asr Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 91, "daysSinceModification": 1, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
64c7f2911f9614c3e88fc0e1
hf-vision/object_detection_leaderboard
hf-vision
object_detection_leaderboard
2023-07-31T17:42:41
2024-07-16 20:25:01
2
152
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🏆", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.38.1", "short_description": null, "title": "Open Object Detection Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 555, "daysSinceModification": 203, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
65a7b7a7feb385935219fd72
mii-llm/open_ita_llm_leaderboard
mii-llm
open_ita_llm_leaderboard
2024-01-17T11:19:03
2024-12-09 10:21:36
2
68
[ "test:public", "judge:auto", "submission:automatic", "language:italian" ]
{}
{ "results": { "last_modified": "2025-02-04T13:48:39.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "red", "duplicated_from": null, "emoji": "🏆", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.0", "short_description": "Track, rank and evaluate open LLMs in the italian language!", "title": "Open Ita Llm Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 385, "daysSinceModification": 58, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
Italian capabilities LLM leaderboard.
647c02aeb31514a4a6ed3fe1
uonlp/open_multilingual_llm_leaderboard
uonlp
open_multilingual_llm_leaderboard
2023-06-04T03:19:10
2024-11-23 18:57:01
1
51
[ "test:public", "language:chinese", "language:marathi", "language:danish", "language:vietnamese", "submission:manual", "language:nepali", "language:telugu", "language:gujarati", "language:italian", "language:dutch", "language:serbian", "language:spanish", "language:slovak", "language:hungarian", "eval:generation", "language:swedish", "language:ukrainian", "language:indonesian", "language:tamil", "language:portuguese", "language:kannada", "modality:text", "language:croatian", "language:malayalam", "language:armenian", "language:catalan", "language:arabic", "language:hindi", "language:russian", "language:french", "language:german", "language:basque", "language:romanian", "language:bengali" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "purple", "colorTo": "indigo", "duplicated_from": null, "emoji": "🐨", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "5.5.0", "short_description": null, "title": "Open Multilingual Llm Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 612, "daysSinceModification": 74, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Large Language Models' performance across 31 diverse languages using standardized benchmarks.
65650d01a0623adbd7387390
vectara/leaderboard
vectara
leaderboard
2023-11-27T21:41:21
2025-01-15 17:12:55
4
92
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.44.0", "short_description": null, "title": "HHEM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 435, "daysSinceModification": 21, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
66231fbfd323727f81a5bbec
SeaLLMs/LLM_Leaderboard_for_SEA
SeaLLMs
LLM_Leaderboard_for_SEA
2024-04-20T01:51:59
2024-12-10 12:29:34
2
18
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "yellow", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.27.0", "short_description": null, "title": "LLM Leaderboard for SEA" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 291, "daysSinceModification": 57, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
66bd6a9a359d1ee9690153b9
llm-jp/open-japanese-llm-leaderboard
llm-jp
open-japanese-llm-leaderboard
2024-08-15T02:40:26
2024-12-24 09:03:04
2
66
[ "language:Japanese", "test:public", "modality:text", "eval:generation", "language:日本語", "judge:auto", "submission:automatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "gray", "colorTo": "gray", "duplicated_from": null, "emoji": "🌸", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.9.1", "short_description": null, "title": "Open Japanese LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": [ "日本語", "japanese" ], "modality": null }, "categoryCounts": { "eval": null, "language": 2, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 174, "daysSinceModification": 43, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
The Open Japanese LLM Leaderboard by LLM-jp evaluates the performance of Japanese Large Language Models (LLMs) on more than 16 tasks, from classical to modern NLP.
65d70863ef58a69470ead2fc
openlifescienceai/open_medical_llm_leaderboard
openlifescienceai
open_medical_llm_leaderboard
2024-02-22T08:40:03
2025-01-29 06:03:29
5
329
[ "test:public", "modality:text", "eval:generation", "judge:auto", "domain:medical", "submission:automatic" ]
{}
{ "results": { "last_modified": "2025-01-29T05:54:02.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.4.0", "short_description": null, "title": "Open Medical-LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 349, "daysSinceModification": 7, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates LLMs across a diverse array of medical datasets.
64fad4e58d50404bc4ee667f
opencompass/opencompass-llm-leaderboard
opencompass
opencompass-llm-leaderboard
2023-09-08T08:01:41
2024-02-08 03:03:58
1
89
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "yellow", "duplicated_from": null, "emoji": "🚀", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "3.43.1", "short_description": null, "title": "OpenCompass LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 516, "daysSinceModification": 363, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
656e4128896aeb1168858cf5
Nexusflow/Nexus_Function_Calling_Leaderboard
Nexusflow
Nexus_Function_Calling_Leaderboard
2023-12-04T21:14:16
2024-03-22 00:27:33
1
89
[ "eval:generation", "modality:tools" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "pink", "colorTo": "indigo", "duplicated_from": null, "emoji": "🐠", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "4.7.1", "short_description": null, "title": "Nexus Function Calling Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 428, "daysSinceModification": 320, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates LLM capabilities in zero-shot function calling.
64bea7e1f671da974e585dcf
bigcode/bigcode-models-leaderboard
bigcode
bigcode-models-leaderboard
2023-07-24T16:33:37
2024-11-11 20:36:38
22
1,104
[ "test:public", "modality:text", "judge:auto", "eval:code", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "pink", "colorTo": "blue", "duplicated_from": null, "emoji": "📈", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.36.1", "short_description": null, "title": "Big Code Models Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "code" ], "language": null, "modality": null }, "categoryCounts": { "eval": 1, "language": null, "modality": null }, "categoryValues": { "judge": [ "auto" ], "submission": [ "semiautomatic" ], "test": [ "public" ] }, "daysSinceCreation": 562, "daysSinceModification": 85, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
Specialized leaderboard for models with coding capabilities 🖥️ (evaluates on HumanEval and MultiPL-E).
6627574f6f29e1f14c937eec
MohamedRashad/arabic-tokenizers-leaderboard
MohamedRashad
arabic-tokenizers-leaderboard
2024-04-23T06:38:07
2024-12-10 03:04:42
1
28
[ "eval:performance", "modality:text", "modality:artefacts", "submission:automatic", "language:arabic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "green", "duplicated_from": null, "emoji": "⚡", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "5.8.0", "short_description": null, "title": "Arabic Tokenizers Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 288, "daysSinceModification": 57, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Compares the performance of various tokenizers on Arabic text.
662a871654a69f2d529d3987
OALL/Open-Arabic-LLM-Leaderboard
OALL
Open-Arabic-LLM-Leaderboard
2024-04-25T16:38:46
2024-12-23 20:41:05
1
121
[ "test:public", "modality:text", "judge:auto", "submission:automatic", "language:arabic" ]
{}
{ "results": { "last_modified": "2025-02-04T22:12:53.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🏆", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.0", "short_description": "Track, rank and evaluate open Arabic LLMs and chatbots", "title": "Open Arabic LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 286, "daysSinceModification": 43, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
LLM leaderboard comparing Arabic language models' performance across various benchmarks including reasoning, language understanding and cultural alignment, using zero-shot evaluation.
6630eeff792bfb20c922c4dd
finosfoundation/Open-Financial-LLM-Leaderboard
finosfoundation
Open-Financial-LLM-Leaderboard
2024-04-30T13:15:43
2025-01-23 18:22:45
1
59
[ "eval:performance", "modality:text", "domain:financial", "eval:generation", "language:english", "submission:automatic", "eval:math", "eval:code", "eval:safety", "language:spanish" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.42.0", "short_description": null, "title": "Open FinLLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 281, "daysSinceModification": 13, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates financial LLMs' capabilities across multiple domains, including Spanish-language tasks, using comprehensive benchmarks and zero-shot settings.
65adcd10d6b10af9119fc960
Vchitect/VBench_Leaderboard
Vchitect
VBench_Leaderboard
2024-01-22T02:04:00
2025-01-23 06:16:37
1
163
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "indigo", "colorTo": "pink", "duplicated_from": null, "emoji": "📊", "license": "mit", "pinned": false, "sdk": "gradio", "sdk_version": "4.36.1", "short_description": null, "title": "VBench Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 380, "daysSinceModification": 13, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
659c951ed9a59ad53d6a9a37
mlabonne/Yet_Another_LLM_Leaderboard
mlabonne
Yet_Another_LLM_Leaderboard
2024-01-09T00:36:46
2024-06-16 22:12:56
1
185
[ "modality:text", "judge:auto", "submission:manual" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "red", "colorTo": "blue", "duplicated_from": null, "emoji": "🌖", "license": "apache-2.0", "pinned": true, "sdk": "docker", "sdk_version": null, "short_description": null, "title": "Yet Another LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 393, "daysSinceModification": 233, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
663e41b203c47894a6e53213
AIR-Bench/leaderboard
AIR-Bench
leaderboard
2024-05-10T15:48:02
2024-12-18 12:49:03
1
66
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.29.0", "short_description": null, "title": "AIR-Bench Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 271, "daysSinceModification": 49, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
65b0a64db233ea8ce65f0bc5
echo840/ocrbench-leaderboard
echo840
ocrbench-leaderboard
2024-01-24T05:55:25
2025-01-16 14:01:43
1
116
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "gray", "colorTo": "pink", "duplicated_from": null, "emoji": "🏆", "license": "mit", "pinned": false, "sdk": "gradio", "sdk_version": "4.15.0", "short_description": null, "title": "Ocrbench Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 378, "daysSinceModification": 20, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
6687c3599ea2b2e70cce52a5
openGPT-X/european-llm-leaderboard
openGPT-X
european-llm-leaderboard
2024-07-05T09:56:41
2024-09-14 15:30:01
1
88
[ "language:greek", "language:slovenian", "language:czech", "language:danish", "judge:auto", "language:italian", "language:dutch", "language:spanish", "language:slovak", "language:hungarian", "language:swedish", "language:english", "language:bulgarian", "language:portuguese", "language:finnish", "language:polish", "language:estonian", "language:latvian", "language:french", "language:german", "language:lithuanian", "language:romanian" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "blue", "duplicated_from": null, "emoji": "🌍", "license": "unknown", "pinned": false, "sdk": "gradio", "sdk_version": "4.19.2", "short_description": null, "title": "European Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 215, "daysSinceModification": 144, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Multilingual evaluation of LLMs across 21 European languages
66742a00ccd71b5bb784b85f
m42-health/clinical_ner_leaderboard
m42-health
clinical_ner_leaderboard
2024-06-20T13:09:20
2024-10-14 10:06:19
1
19
[ "eval:performance", "test:public", "modality:text", "language:english", "judge:auto", "submission:automatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "Clinical NER Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": [ "text" ] }, "categoryCounts": { "eval": null, "language": null, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "automatic" ], "test": [ "public" ] }, "daysSinceCreation": 230, "daysSinceModification": 114, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
Evaluates LLMs' clinical named-entity-recognition capabilities across diverse medical datasets using token- and span-based evaluation metrics. Also known as the Named Clinical Entity Recognition Leaderboard.
660a51922862c0cea449bbb3
Cognitive-Lab/indic_llm_leaderboard
Cognitive-Lab
indic_llm_leaderboard
2024-04-01T06:17:54
2024-04-19 14:48:17
1
22
[ "modality:text", "eval:generation", "language:indic", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "indigo", "colorTo": "green", "duplicated_from": null, "emoji": "🔥", "license": "gpl-3.0", "pinned": false, "sdk": "streamlit", "sdk_version": "1.32.2", "short_description": null, "title": "Indic Llm Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 310, "daysSinceModification": 292, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Indic LLMs on text generation and understanding tasks in 7 Indian languages.
667b29b383f9e85330f260fa
vidore/vidore-leaderboard
vidore
vidore-leaderboard
2024-06-25T20:33:55
2024-12-05 10:28:29
1
106
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "mit", "pinned": true, "sdk": "gradio", "sdk_version": "4.37.1", "short_description": null, "title": "Vidore Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 224, "daysSinceModification": 62, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
65af75895c9e7ad7bb512d22
BAAI/open_cn_llm_leaderboard
BAAI
open_cn_llm_leaderboard
2024-01-23T08:15:05
2025-01-02 04:21:39
1
107
[]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🏆", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.1", "short_description": null, "title": "Open Chinese LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 379, "daysSinceModification": 34, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
659d762f50c1bbee5be20c63
AI-Secure/llm-trustworthy-leaderboard
AI-Secure
llm-trustworthy-leaderboard
2024-01-09T16:37:03
2024-11-22 05:50:44
1
87
[ "modality:text", "submission:automatic", "eval:safety" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.37.1", "short_description": null, "title": "LLM Safety Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 393, "daysSinceModification": 75, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates bias, safety, toxicity, and related risks that matter when a chatbot interacts with real users.
6639befd49238ebdde0dc911
Intel/low_bit_open_llm_leaderboard
Intel
low_bit_open_llm_leaderboard
2024-05-07T05:41:17
2024-12-23 06:20:55
1
163
[ "eval:performance", "test:public", "modality:text", "judge:auto", "modality:artefacts", "submission:automatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🏆", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.31.5", "short_description": "Track, rank and evaluate open LLMs and chatbots", "title": "Low-bit Quantized Open LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 274, "daysSinceModification": 44, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
A benchmark that evaluates low-bit quantized LLMs across multiple tasks using standardized test sets, focusing on both model performance and quantization efficiency.
66b2e7ef523bf90aa7062503
ThaiLLM-Leaderboard/leaderboard
ThaiLLM-Leaderboard
leaderboard
2024-08-07T03:20:15
2024-11-16 12:07:12
1
41
[ "test:public", "judge:model", "modality:text", "eval:generation", "judge:auto", "submission:manual", "language:thai" ]
{}
{ "results": { "last_modified": "2025-02-01T18:31:26.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.26.0", "short_description": null, "title": "Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 182, "daysSinceModification": 81, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates Thai LLM capabilities across multiple linguistic benchmarks using diverse evaluation methods.
6738561119cbbe30918d6435
PartAI/open-persian-llm-leaderboard
PartAI
open-persian-llm-leaderboard
2024-11-16T08:21:37
2025-02-02 12:56:17
1
47
[ "modality:text", "language:persian" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "red", "duplicated_from": null, "emoji": "🏅", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "4.42.0", "short_description": "Open Persian LLM Leaderboard", "title": "Open Persian LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 81, "daysSinceModification": 3, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
65f5da0cde5e636ca24f3083
hebrew-llm-leaderboard/leaderboard
hebrew-llm-leaderboard
leaderboard
2024-03-16T17:42:36
2025-01-20 16:38:30
1
30
[ "modality:text", "eval:generation", "judge:auto", "language:Hebrew", "submission:automatic", "test:mix" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.44.0", "short_description": null, "title": "Hebrew LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": [ "hebrew" ], "modality": [ "text" ] }, "categoryCounts": { "eval": 1, "language": 1, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "automatic" ], "test": [ "mix" ] }, "daysSinceCreation": 326, "daysSinceModification": 16, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
67501af26098fd5ee69ca347
inceptionai/AraGen-Leaderboard
inceptionai
AraGen-Leaderboard
2024-12-04T09:03:46
2025-02-02 10:34:16
1
24
[ "modality:text", "eval:generation", "submission:automatic", "language:arabic", "eval:safety" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "purple", "duplicated_from": null, "emoji": "📊", "license": null, "pinned": true, "sdk": "gradio", "sdk_version": "5.7.1", "short_description": "Generative Tasks Evaluation of Arabic LLMs", "title": "AraGen Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 63, "daysSinceModification": 3, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Arabic chat LLMs on generation & safety using 3C3H
6477a87af911e9e76c68efc9
qiantong-xu/toolbench-leaderboard
qiantong-xu
toolbench-leaderboard
2023-05-31T20:05:14
2023-11-13 18:07:56
2
65
[ "eval:generation", "judge:function", "modality:tools" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "green", "duplicated_from": null, "emoji": "⚡", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "3.32.0", "short_description": null, "title": "Toolbench Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 616, "daysSinceModification": 450, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates LLMs on API function calling tasks.
677f99fe8d5985fec9dcaea3
omlab/open-agent-leaderboard
omlab
open-agent-leaderboard
2025-01-09T09:42:22
2025-01-24 07:18:13
1
11
[ "eval:performance", "modality:text", "modality:agent", "judge:auto", "eval:math", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "green", "duplicated_from": null, "emoji": "🥇", "license": "mit", "pinned": true, "sdk": "gradio", "sdk_version": "4.44.1", "short_description": "Open Agent Leaderboard", "title": "Open Agent Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 27, "daysSinceModification": 12, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Compares the math reasoning capabilities and performance of conversational agents.
654dc45a956e2f124cdfba5a
mesolitica/malay-llm-leaderboard
mesolitica
malay-llm-leaderboard
2023-11-10T05:49:14
2024-06-15 10:17:46
0
8
[ "modality:text", "language:malay" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "purple", "duplicated_from": null, "emoji": "🏆🇲🇾🤖", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.31.2", "short_description": null, "title": "Malay LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 453, "daysSinceModification": 235, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
655e4654fcbe7329b92cf372
mesolitica/malaysian-embedding-leaderboard
mesolitica
malaysian-embedding-leaderboard
2023-11-22T18:20:04
2024-09-22 14:46:15
0
6
[ "test:public", "modality:text", "eval:generation", "judge:auto", "language:malay", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "yellow", "duplicated_from": null, "emoji": "🏆🇲🇾📋", "license": null, "pinned": false, "sdk": "gradio", "sdk_version": "4.5.0", "short_description": null, "title": "Malaysian Embedding Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 441, "daysSinceModification": 136, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Malay language embedding models on text retrieval tasks using public datasets.
64dc410a96f0f217e44772d8
AILab-CVC/SEED-Bench_Leaderboard
AILab-CVC
SEED-Bench_Leaderboard
2023-08-16T03:22:50
2025-01-21 09:07:19
0
81
[ "test:public", "modality:image", "modality:text", "eval:generation", "judge:function", "modality:video", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "gray", "colorTo": "blue", "duplicated_from": null, "emoji": "🏆", "license": "cc-by-4.0", "pinned": false, "sdk": "gradio", "sdk_version": "3.40.1", "short_description": null, "title": "SEED-Bench Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 539, "daysSinceModification": 15, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates multimodal LLMs on text/image/video understanding.
648b40be34fee97b500a7975
ml-energy/leaderboard
ml-energy
leaderboard
2023-06-15T16:47:58
2024-10-04 17:57:22
0
8
[ "eval:performance", "test:public", "modality:image", "modality:text", "judge:auto", "submission:automatic", "modality:video" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": null, "colorTo": null, "duplicated_from": null, "emoji": "⚡", "license": null, "pinned": true, "sdk": "gradio", "sdk_version": "3.39.0", "short_description": null, "title": "ML.ENERGY Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 601, "daysSinceModification": 124, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates GenAI models' energy consumption and inference performance
65d23c50053a863f53aaa719
sam-paech/EQ-Bench-Leaderboard
sam-paech
EQ-Bench-Leaderboard
2024-02-18T17:20:16
2024-10-01 04:56:15
0
21
[ "modality:text", "eval:generation", "eval:safety" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "yellow", "colorTo": "purple", "duplicated_from": null, "emoji": "💗", "license": "mit", "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "EQ Bench" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 353, "daysSinceModification": 127, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Compares LLMs on emotional intelligence, creative writing, and judging creative writing (MAGI-Hard, Creative Writing, and Judgemark benchmarks).
65b3ccae16301f403033baac
logikon/open_cot_leaderboard
logikon
open_cot_leaderboard
2024-01-26T15:15:58
2024-11-02 11:06:00
0
50
[ "test:public", "modality:text", "eval:generation", "submission:automatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "yellow", "duplicated_from": "logikon/open_cot_leaderboard", "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.0", "short_description": "Track, rank and evaluate open LLMs' CoT quality", "title": "Open CoT Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 376, "daysSinceModification": 95, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates Large Language Models' chain-of-thought reasoning performance across multiple logical reasoning tasks.
65824791b1c4ab777ae0d6b7
MERaLiON/SeaEval_Leaderboard
MERaLiON
SeaEval_Leaderboard
2023-12-20T01:46:57
2024-12-31 05:16:16
0
7
[ "language:chinese", "language:filipino", "modality:text", "eval:generation", "language:english", "language:indonesian", "language:vietnamese", "language:malay", "language:spanish" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": false, "sdk": "streamlit", "sdk_version": "1.36.0", "short_description": null, "title": "Leaderboard / SeaEval" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 413, "daysSinceModification": 36, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates multilingual LLMs on language understanding, reasoning and cultural knowledge.
660e4753e8763d8d1db1c465
livecodebench/leaderboard
livecodebench
leaderboard
2024-04-04T06:23:15
2024-06-07 06:38:04
0
34
[ "eval:code", "eval:generation", "test:public" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🐠", "license": "mit", "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 307, "daysSinceModification": 243, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Large Language Models' code generation capabilities across different difficulty levels using holistic and contamination-free benchmarking.
65f42c08e364a7d45b73f76c
sparse-generative-ai/open-moe-llm-leaderboard
sparse-generative-ai
open-moe-llm-leaderboard
2024-03-15T11:07:52
2024-08-13 09:30:40
0
32
[ "eval:performance", "test:public", "modality:text", "eval:generation", "judge:auto", "submission:automatic", "eval:math" ]
{}
{ "results": { "last_modified": "2024-08-26T08:47:37.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🔥", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.26.0", "short_description": null, "title": "OPEN-MOE-LLM-LEADERBOARD" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 327, "daysSinceModification": 176, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
A leaderboard evaluating performance and efficiency metrics of open-source Mixture of Experts (MoE) LLMs across multiple benchmarks.
65e5e7b2a87482d11980782d
Intel/powered_by_intel_llm_leaderboard
Intel
powered_by_intel_llm_leaderboard
2024-03-04T15:24:34
2025-01-23 12:25:01
0
38
[ "test:public", "modality:text", "eval:generation", "judge:auto", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "green", "duplicated_from": null, "emoji": "💻", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "Powered By Intel Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": null, "modality": [ "text" ] }, "categoryCounts": { "eval": 1, "language": null, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "semiautomatic" ], "test": [ "public" ] }, "daysSinceCreation": 338, "daysSinceModification": 13, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
66214a6b89877a1889be628f
occiglot/euro-llm-leaderboard
occiglot
euro-llm-leaderboard
2024-04-18T16:29:31
2024-10-09 11:04:19
0
47
[ "test:public", "language:french", "modality:text", "eval:generation", "language:german", "language:english", "submission:automatic", "language:italian", "language:dutch", "language:spanish" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.1", "short_description": null, "title": "Occiglot Euro LLM Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 293, "daysSinceModification": 119, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Large Language Models' performance across multiple European languages using machine-translated benchmarks.
65a2d7dcb4f188a4db12dc94
NPHardEval/NPHardEval-leaderboard
NPHardEval
NPHardEval-leaderboard
2024-01-13T18:35:08
2024-02-05 22:44:01
0
52
[ "test:public", "modality:text", "judge:auto", "submission:automatic", "eval:math", "eval:code" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "🥇", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.4.0", "short_description": null, "title": "NPHardEval Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 389, "daysSinceModification": 365, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates LLM reasoning on computational complexity problems.
66039ba97650c6c4369aceb8
instructkr/LogicKor-leaderboard
instructkr
LogicKor-leaderboard
2024-03-27T04:08:09
2024-03-27 04:13:08
0
34
[ "test:public", "language:korean", "modality:text", "eval:generation" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "yellow", "colorTo": "red", "duplicated_from": null, "emoji": "πŸ”₯πŸ“Š", "license": "apache-2.0", "pinned": true, "sdk": "static", "sdk_version": null, "short_description": null, "title": "LogicKor Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 315, "daysSinceModification": 315, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Korean Large Language Models' performance across multiple reasoning and language tasks. This project is no longer maintained.
6752a095c341d1f41069ec61
maum-ai/KOFFVQA-Leaderboard
maum-ai
KOFFVQA-Leaderboard
2024-12-06T06:58:29
2025-02-05 02:16:09
0
6
[ "modality:image", "modality:text", "language:korean", "eval:generation" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "purple", "duplicated_from": null, "emoji": "πŸ†", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.5.0", "short_description": null, "title": "KOFFVQA Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 61, "daysSinceModification": 0, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
true
Evaluates Korean Vision-Language Models on visual question answering.
65944138a260709928710fb6
allenai/reward-bench
allenai
reward-bench
2024-01-02T17:00:40
2024-12-11 20:55:17
3
323
[ "eval:performance", "test:public", "modality:text", "judge:auto", "eval:safety" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "pink", "colorTo": "blue", "duplicated_from": null, "emoji": "πŸ“", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.0", "short_description": null, "title": "Reward Bench Leaderboard" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 400, "daysSinceModification": 55, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates reward models across chat, safety and reasoning tasks.
67695a9a4f03e8728cbfb199
adyen/DABstep
adyen
DABstep
2024-12-23T12:42:02
2025-02-04 15:00:52
11
15
[ "test:public", "modality:text", "eval:generation", "judge:auto", "submission:automatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ•Ί", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": "DABstep Reasoning Benchmark Leaderboard", "title": "DABstep Leaderboard" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 44, "daysSinceModification": 1, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
A benchmark that measures LLMs' ability to perform data analysis by evaluating their answers to questions about multiple documents.
66c853076058eb4e6bb491b9
speakleash/polish_medical_leaderboard
speakleash
polish_medical_leaderboard
2024-08-23T09:14:47
2024-09-15 19:43:45
0
7
[ "language:polish", "test:public", "modality:text", "eval:generation", "judge:auto", "domain:medical", "submission:manual" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "gray", "colorTo": "red", "duplicated_from": null, "emoji": "πŸ‡΅πŸ‡±πŸ©ΊπŸ†", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.36.0", "short_description": null, "title": "Polish Medical Leaderboard" }
[ "leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": [ "polish" ], "modality": [ "text" ] }, "categoryCounts": { "eval": 1, "language": 1, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "manual" ], "test": [ "public" ] }, "daysSinceCreation": 166, "daysSinceModification": 143, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
The leaderboard evaluates language models on Polish Board Certification Examinations.
65e8f3af5686ed1f5ec30cdc
allenai/WildBench
allenai
WildBench
2024-03-06T22:52:31
2024-08-06 05:40:31
1
221
[ "test:public", "judge:model", "modality:text", "eval:generation", "eval:math", "eval:code", "submission:semiautomatic" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "yellow", "duplicated_from": null, "emoji": "🦁", "license": null, "pinned": true, "sdk": "gradio", "sdk_version": "4.19.2", "short_description": null, "title": "AI2 WildBench Leaderboard (V2)" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 335, "daysSinceModification": 183, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates LLMs on real-world tasks across multiple capabilities.
65fdbb08b9d70ef8298cd350
antoinelouis/decouvrir
antoinelouis
decouvrir
2024-03-22T17:08:24
2024-09-03 13:11:14
0
10
[ "language:french", "modality:text", "eval:rag", "language:French", "judge:auto", "modality:artifact", "submission:manual" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "4.21.0", "short_description": "Leaderboard of information retrieval models in French", "title": "DΓ©couvrIR" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": [ "french" ], "modality": [ "text" ] }, "categoryCounts": { "eval": null, "language": 1, "modality": 1 }, "categoryValues": { "judge": null, "submission": [ "manual" ], "test": null }, "daysSinceCreation": 320, "daysSinceModification": 155, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
Evaluates French IR models' performance on passage retrieval tasks.
670ed70fd75f1143525d9a33
latticeflow/compl-ai-board
latticeflow
compl-ai-board
2024-10-15T20:56:47
2024-12-02 14:06:52
0
24
[ "domain:legal", "test:public", "modality:text", "judge:auto", "submission:automatic", "eval:safety" ]
{}
{ "results": { "last_modified": "2024-10-16T13:12:52.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.4.0", "short_description": null, "title": "EU AI Act Compliance Leaderboard" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 112, "daysSinceModification": 65, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
true
Evaluates LLM compliance with EU AI Act technical requirements & safety standards.
662e0e6445eb426d06590b55
speakleash/mt-bench-pl
speakleash
mt-bench-pl
2024-04-28T08:52:52
2024-10-25 19:54:43
0
20
[ "language:polish", "test:public", "modality:text", "judge:model", "eval:generation", "eval:math", "submission:manual", "eval:code" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "pink", "duplicated_from": null, "emoji": "πŸ“ŠπŸ‡΅πŸ‡±", "license": "other", "pinned": true, "sdk": "gradio", "sdk_version": "4.31.4", "short_description": null, "title": "MT Bench PL" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": [ "polish" ], "modality": [ "text" ] }, "categoryCounts": { "eval": 1, "language": 1, "modality": 1 }, "categoryValues": { "judge": [ "model" ], "submission": [ "manual" ], "test": null }, "daysSinceCreation": 283, "daysSinceModification": 103, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
true
Evaluates Polish LLM capabilities across diverse linguistic and cognitive tasks using a specialized, culturally-adapted benchmarking methodology.
6690283166f3099d1265f6b7
allenai/ZebraLogic
allenai
ZebraLogic
2024-07-11T18:45:05
2024-11-05 22:49:28
0
84
[ "test:public", "modality:text", "judge:auto", "submission:automatic", "eval:math" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "blue", "colorTo": "yellow", "duplicated_from": null, "emoji": "πŸ¦“", "license": null, "pinned": true, "sdk": "gradio", "sdk_version": "4.19.2", "short_description": null, "title": "Zebra Logic Bench" }
[ "tag:leaderboard" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 209, "daysSinceModification": 91, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
true
Evaluates LLM logical reasoning on puzzle-solving tasks.
65becbc4744da3e639da88d9
HaizeLabs/red-teaming-resistance-benchmark
HaizeLabs
red-teaming-resistance-benchmark
2024-02-03T23:27:00
2024-06-07 18:34:09
1
41
[ "static" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "pink", "colorTo": "red", "duplicated_from": null, "emoji": "πŸ’»", "license": null, "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "Redteaming Resistance Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 367, "daysSinceModification": 243, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
65eb1aa621ddaff47070d2ad
Xenova/webgpu-embedding-benchmark
Xenova
webgpu-embedding-benchmark
2024-03-08T14:03:18
2024-03-12 14:33:10
1
59
[ "static" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "blue", "colorTo": "yellow", "duplicated_from": null, "emoji": "🐠", "license": null, "pinned": false, "sdk": "static", "sdk_version": null, "short_description": null, "title": "WebGPU Embedding Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 334, "daysSinceModification": 330, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
647848ca9c1f42c1f4d7e033
gaia-benchmark/leaderboard
gaia-benchmark
leaderboard
2023-06-01T07:29:14
2025-01-30 07:53:25
15
246
[ "gradio", "leaderboard", "region:us" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "yellow", "colorTo": "indigo", "duplicated_from": null, "emoji": "🦾", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "GAIA Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 615, "daysSinceModification": 6, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 2 } }
null
null
65f6886bbe57cd07cb51aa8f
Marqo/CLIP-benchmarks
Marqo
CLIP-benchmarks
2024-03-17T06:06:35
2024-08-07 06:24:27
1
11
[ "streamlit" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "red", "colorTo": "green", "duplicated_from": null, "emoji": "🌍", "license": "apache-2.0", "pinned": false, "sdk": "streamlit", "sdk_version": "1.25.0", "short_description": null, "title": "CLIP Benchmarks" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 325, "daysSinceModification": 182, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
6687e8366d98cab219a29b72
ttsds/benchmark
ttsds
benchmark
2024-07-05T12:33:58
2024-08-31 19:50:13
1
20
[ "gradio", "leaderboard", "submission:semiautomatic", "test:public", "judge:auto", "modality:audio", "eval:generation", "tts" ]
{}
{ "results": { "last_modified": "2024-11-19T16:49:02.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "mit", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": "Text-To-Speech (TTS) Evaluation using objective metrics.", "title": "TTSDS Benchmark and Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": [ "generation" ], "language": null, "modality": [ "audio" ] }, "categoryCounts": { "eval": 1, "language": null, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "semiautomatic" ], "test": [ "public" ] }, "daysSinceCreation": 215, "daysSinceModification": 158, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": true, "isRunning": true }, "score": 4 } }
null
null
666f8193d148ca0bcfbca2ed
Intel/UnlearnDiffAtk-Benchmark
Intel
UnlearnDiffAtk-Benchmark
2024-06-17T00:21:39
2025-02-04 15:58:57
1
7
[ "gradio", "region:us" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "UnlearnDiffAtk Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 233, "daysSinceModification": 1, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": true, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
66b4896bcc8441dc730567e5
panuthept/thai_sentence_embedding_benchmark
panuthept
thai_sentence_embedding_benchmark
2024-08-08T09:01:31
2024-08-08 16:22:44
1
12
[ "gradio" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "green", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ₯‡", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "Thai Sentence Embedding Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 181, "daysSinceModification": 181, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
656449ea771319d93b10fe07
protectai/prompt-injection-benchmark
protectai
prompt-injection-benchmark
2023-11-27T07:48:58
2024-11-20 17:26:06
1
13
[ "gradio" ]
{}
null
RUNNING
{ "app_file": null, "colorFrom": "yellow", "colorTo": "gray", "duplicated_from": null, "emoji": "πŸ“", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": "5.6.0", "short_description": null, "title": "Prompt Injection Detection Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 436, "daysSinceModification": 77, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
670cdeb1dec346e701aa390b
m42-health/MEDIC-Benchmark
m42-health
MEDIC-Benchmark
2024-10-14T09:04:49
2025-01-20 08:03:24
1
5
[ "gradio", "leaderboard", "submission:automatic", "test:public", "judge:auto", "modality:text" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "purple", "colorTo": "yellow", "duplicated_from": null, "emoji": "πŸ“Š", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "MEDIC Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": [ "text" ] }, "categoryCounts": { "eval": null, "language": null, "modality": 1 }, "categoryValues": { "judge": [ "auto" ], "submission": [ "automatic" ], "test": [ "public" ] }, "daysSinceCreation": 114, "daysSinceModification": 16, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": false, "hasTags": true, "isRunning": true }, "score": 3 } }
null
null
6730c9858b5a645918504e5b
StarscreamDeceptions/Multilingual-MMLU-Benchmark-Leaderboard
StarscreamDeceptions
Multilingual-MMLU-Benchmark-Leaderboard
2024-11-10T14:56:05
2024-11-25 07:51:11
1
10
[ "gradio", "multilingual", "benchmark", "MMMLU", "leaderboard", "machine learning" ]
{}
{ "results": { "last_modified": "2024-11-13T16:41:46.000Z" } }
RUNNING
{ "app_file": "app.py", "colorFrom": "pink", "colorTo": "purple", "duplicated_from": null, "emoji": "πŸ†", "license": "apache-2.0", "pinned": true, "sdk": "gradio", "sdk_version": null, "short_description": null, "title": "🌐 Multilingual MMLU Benchmark Leaderboard" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 87, "daysSinceModification": 72, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": true, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": true, "hasRecentResults": false, "hasResults": true, "hasTags": false, "isRunning": true }, "score": 3 } }
null
null
650072b768c6cc778ce4d0aa
optimum/auto-benchmark
optimum
auto-benchmark
2023-09-12T14:16:23
2024-09-27 08:55:34
0
12
[ "gradio" ]
{}
null
RUNNING
{ "app_file": "app.py", "colorFrom": "purple", "colorTo": "indigo", "duplicated_from": null, "emoji": "πŸ‹οΈ", "license": "apache-2.0", "pinned": false, "sdk": "gradio", "sdk_version": "4.44.0", "short_description": null, "title": "Auto Benchmark" }
[ "benchmark" ]
{ "categoryAllValues": { "eval": null, "language": null, "modality": null }, "categoryCounts": { "eval": null, "language": null, "modality": null }, "categoryValues": { "judge": null, "submission": null, "test": null }, "daysSinceCreation": 512, "daysSinceModification": 131, "daysSinceResultsUpdate": null, "hasRecentResults": false, "hasResults": false, "isNew": false, "isRecentlyUpdated": false, "quality": { "flags": { "hasLeaderboardOrArenaTag": false, "hasRecentResults": false, "hasResults": false, "hasTags": false, "isRunning": true }, "score": 1 } }
null
null
End of preview.
