Update README.md
README.md
CHANGED
@@ -3,7 +3,7 @@ license: mit
 language:
 - en
 ---
-# Dataset Card for
+# Dataset Card for AGIEval

 ## Dataset Description

@@ -14,11 +14,12 @@ language:
 ### Dataset Summary

 AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.
-
+


 ### Citation Information

+Dataset taken from https://github.com/microsoft/AGIEval
 ```
 @misc{zhong2023agieval,
       title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
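For readers who want to look at the data the card now points to, here is a minimal sketch, assuming you have cloned https://github.com/microsoft/AGIEval locally and that tasks are shipped as JSON-lines files; the file path and task name below are placeholders, not values taken from this commit, and the record fields vary by task.

```python
# Minimal sketch: reading one AGIEval task after cloning
# https://github.com/microsoft/AGIEval. The path and task name below are
# placeholders (not specified by this card); adjust them to your checkout.
import json
from pathlib import Path

task_file = Path("AGIEval/data/v1/sat-math.jsonl")  # placeholder path

with task_file.open(encoding="utf-8") as f:
    # One question record per line in the JSON-lines file.
    questions = [json.loads(line) for line in f if line.strip()]

print(f"Loaded {len(questions)} questions")
print(questions[0])  # field names differ across tasks; inspect before use
```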