Add more detail to README.md on use cases & non-use cases for the dataset.
README.md
CHANGED
@@ -32,17 +32,22 @@ Where Everyday Human Reasoning Still Surpasses Frontier Models.
 
 ## Uses
 
-
+This tiny dataset (10 entries) comprises tasks that are easy for humans but still difficult for LLMs; the dataset is updated by the SimpleBench team.
+This is only the publicly available version of the dataset.
 
 ### Direct Use
 
-
+- The intended use is quite niche: the goal is to evaluate LLMs on problems that humans find easy to solve with pen and paper but that remain difficult for LLMs.
 
-
+- It is simple to evaluate because there are only 10 questions, and since they are multiple choice, scoring is easy to automate (no judge LLM needed; just extract the guessed option).
 
 ### Out-of-Scope Use
 
-
+- Needless to say, we advise against training your model on this dataset (benchmark contamination is bad), as well as other immoral/illegal uses.
+
+- It is not ideal for most real-world use cases due to the small sample size (10 Q&A pairs).
+
+- It is also multiple choice, with a 1-in-6 chance of guessing correctly at random, so false positives can happen.
 
 ## Dataset Structure
 
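Since the Direct Use section says scoring can be automated by extracting the guessed option, here is a minimal sketch of what that automation could look like. The JSON layout and field names (`prompt`, `answer`) are assumptions for illustration, not the actual SimpleBench schema. Note too that, as the Out-of-Scope section warns, a random guesser on 10 six-option questions already scores about 1.7/10 in expectation, so small differences in scores are not meaningful.

```python
# Minimal sketch of automated scoring for a small multiple-choice eval set.
# The field names ("prompt", "answer") and the file layout are assumptions,
# not taken from the actual SimpleBench release.
import json
import re

def extract_choice(model_output: str) -> str | None:
    """Return the last standalone option letter (A-F) found in a model reply."""
    matches = re.findall(r"\b([A-F])\b", model_output)
    return matches[-1] if matches else None

def score(dataset_path: str, model_outputs: list[str]) -> float:
    """Compare extracted option letters against the reference answers."""
    with open(dataset_path) as f:
        items = json.load(f)  # assumed: a list of {"prompt": ..., "answer": "A".."F"}
    correct = sum(
        extract_choice(out) == item["answer"]
        for item, out in zip(items, model_outputs)
    )
    return correct / len(items)
```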