Update src/about.py
src/about.py CHANGED (+2 -0)
@@ -69,6 +69,8 @@ Addressing the gaps in existing LLM evaluation frameworks, this benchmark is spe
 
 This benchmark not only fills a critical gap in Persian LLM evaluation but also provides a standardized leaderboard to track progress in developing aligned, ethical, and culturally aware Persian language models.
 
+### Download Dataset
+The full dataset is not publicly accessible; however, you can download a sample of 1,500 entries [here](https://huggingface.co/datasets/MCILAB/1500_sampel/tree/main). The distribution of this sample is as follows:
 """
 
 EVALUATION_QUEUE_TEXT = """
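The added text points readers at the sample-dataset repository. As a minimal sketch of how one might fetch a file from it programmatically, the snippet below builds a direct-download URL using the standard Hugging Face `resolve` URL scheme; the filename `data.csv` is a hypothetical placeholder, not a file name confirmed by the source.

```python
from urllib.parse import quote

# Sample dataset repo from the link in the diff above.
REPO = "MCILAB/1500_sampel"

def sample_file_url(filename: str, revision: str = "main") -> str:
    """Build a direct-download URL for one file in the sample dataset repo,
    following the Hugging Face datasets `resolve` URL scheme."""
    return f"https://huggingface.co/datasets/{REPO}/resolve/{revision}/{quote(filename)}"

# The resulting URL could then be fetched, e.g. with
# urllib.request.urlretrieve(sample_file_url("data.csv"), "data.csv")
print(sample_file_url("data.csv"))
```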