dubniczky committed on commit ce6a97b · verified · 1 Parent(s): 4d6431e

Added more details about the dataset

Files changed (1):
  1. README.md +70 -69
README.md CHANGED
@@ -9,9 +9,9 @@ tags:
  - ctf
  pretty_name: Dynamic Intelligence Assessment Dataset
  size_categories:
- - 1K<n<15K
  ---
- # Dataset Card for Dataset Name

  <!-- Provide a quick summary of the dataset. -->
  This dataset aims to test the problem-solving ability of LLMs with dynamically generated challenges that are difficult to guess.
@@ -20,18 +20,12 @@ This dataset aims to test the problem-solving ability of LLMs with dynamically g

  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
-
- As machine intelligence evolves, the need to test and compare the problem-solving abilities of different AI models grows.
- However, current benchmarks are often simplistic, allowing models to perform uniformly well and making it difficult to distinguish their capabilities. Additionally, benchmarks typically rely on static question-answer pairs that the models might memorize or guess.
- To address these limitations, we introduce Dynamic Intelligence Assessment (DIA), a novel methodology for testing AI models using dynamic question templates and improved metrics across multiple disciplines such as mathematics, cryptography, cybersecurity, and computer science. The accompanying dataset, DIA-Bench, contains a diverse collection of challenge templates with mutable parameters presented in various formats, including text, PDFs, compiled binaries, visual puzzles, and CTF-style cybersecurity challenges. Our framework introduces four new metrics to assess a model’s reliability and confidence across multiple attempts. These metrics revealed that even simple questions are frequently answered incorrectly when posed in varying forms, highlighting significant gaps in models' reliability. Notably, API models like GPT-4o often overestimated their mathematical capabilities, while ChatGPT-4o demonstrated better performance due to effective tool usage. In self-assessment OpenAI's o1-mini proved to have the best judgement on what tasks it should attempt to solve.
- We evaluated 25 state-of-the-art LLMs using DIA-Bench, showing that current models struggle with complex tasks and often display unexpectedly low confidence, even with simpler questions. The DIA framework sets a new standard for assessing not only problem-solving, but also a model's adaptive intelligence and ability to assess its limitations. The dataset is publicly available on the project's page: https://github.com/DIA-Bench

  - **Curated by:** Norbert Tihanyi, Tamas Bisztray, Richard A. Dubniczky, Rebeka Toth, Bertalan Borsos, Bilel Cherif, Ridhi Jain, Lajos Muzsai, Mohamed Amine Ferrag, Ryan Marinelli, Lucas C. Cordeiro, Merouane Debbah, Vasileios Mavroeidis, and Audun Josang
  <!-- - **Funded by:** [More Information Needed] -->
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** English
  - **License:** AL 2.0

  ### Dataset Sources
@@ -44,14 +38,62 @@ We evaluated 25 state-of-the-art LLMs using DIA-Bench, showing that current mode
  ## Uses

  <!-- Address questions around how the dataset is intended to be used. -->
- This dataset is intended to be used as a benchmark to test the problem-solving ability and confidence of LLMs.

  ## Dataset Structure

  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- [More Information Needed]

  ## Dataset Creation
@@ -59,37 +101,12 @@ This dataset is intended to be used as a benchmark to test the problem-solving a

  <!-- Motivation for the creation of this dataset. -->

- [More Information Needed]

  ### Source Data

  <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
-
- #### Data Collection and Processing
-
- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
-
- [More Information Needed]
-
- #### Who are the source data producers?
-
- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]
-
- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
-
- #### Annotation process
-
- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- <!-- This section describes the people or systems who created the annotations. -->

  [More Information Needed]
@@ -97,19 +114,15 @@ This dataset is intended to be used as a benchmark to test the problem-solving a

  <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

  [More Information Needed]

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

  ## Citation [optional]
@@ -117,26 +130,14 @@ Users should be made aware of the risks, biases and limitations of the dataset.

  **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
 
  - ctf
  pretty_name: Dynamic Intelligence Assessment Dataset
  size_categories:
+ - n<15K
  ---
+ # Dynamic Intelligence Assessment Dataset

  <!-- Provide a quick summary of the dataset. -->
  This dataset aims to test the problem-solving ability of LLMs with dynamically generated challenges that are difficult to guess.
 
  ### Dataset Description

+ The DIA Benchmark Dataset is a benchmarking tool consisting of 150 dynamic question generators for evaluating the problem-solving capability of LLMs. It primarily focuses on CTF-style (Capture the Flag) challenges that require knowledge from the fields of mathematics, cryptography, cybersecurity, and computer science. The challenge generators were manually developed by industry experts and tested by multiple individuals to find errors and edge cases. The answers often consist of many characters and big numbers, making correct guessing highly unlikely. This repository contains the generated question and answer pairs that can be fed to AI models to assess their outputs, and it includes multiple generated instances of each test to increase the accuracy of the measurements.

  - **Curated by:** Norbert Tihanyi, Tamas Bisztray, Richard A. Dubniczky, Rebeka Toth, Bertalan Borsos, Bilel Cherif, Ridhi Jain, Lajos Muzsai, Mohamed Amine Ferrag, Ryan Marinelli, Lucas C. Cordeiro, Merouane Debbah, Vasileios Mavroeidis, and Audun Josang
  <!-- - **Funded by:** [More Information Needed] -->
+ - **Language:** English
  - **License:** AL 2.0

  ### Dataset Sources
 
  ## Uses

  <!-- Address questions around how the dataset is intended to be used. -->
+ - Testing the problem-solving ability of an LLM.
+ - Comparing the problem-solving abilities of multiple LLMs.
+ - Testing the response consistency of an LLM.
+ - Testing the confidence of an LLM.

  ## Dataset Structure

  <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

+ During generation, different sizes of the dataset can be created by modifying the `k` parameter, where `k=5` means that each test generator is used to generate `5` instances of that test type. This repository contains four variations of the pre-generated tests, detailed in the following table:
+
+ | File | K | Tests |
+ |------|---|-------|
+ | [DIA-bench-2.json](./DIA-bench-2.json) | 2 | 300 |
+ | [DIA-bench-5.json](./DIA-bench-5.json) | 5 | 750 |
+ | [DIA-bench-10.json](./DIA-bench-10.json) | 10 | 1,500 |
+ | [DIA-bench-100.json](./DIA-bench-100.json) | 100 | 15,000 |
+
+ Please note that depending on the tool or API used to run the tests, the runtime as well as the cost will differ significantly.
+
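The relationship between `k` and the total test count can be sanity-checked after downloading one of the files above. The following is a minimal sketch, not part of the official tooling: the real `json.load` call is shown in a comment, and a tiny synthetic stand-in with the same shape (2 templates, `k=2`) keeps the snippet self-contained. Field names follow the Fields section of this card.

```python
import json
from collections import Counter

# In real use, load one of the pre-generated variants, e.g.:
#   with open("DIA-bench-2.json", encoding="utf-8") as f:
#       data = json.load(f)
# Here a tiny synthetic stand-in with the same shape is used instead.
data = {"questions": []}
for template_id, (level, category) in enumerate(
    [("easy", "Mathematics"), ("hard", "Cryptography")], start=1
):
    for instance in (1, 2):  # k=2 -> two instances per template
        data["questions"].append({
            "challenge": {
                "Q template": template_id,
                "instance": instance,
                "level": level,
                "category": category,
                "adversarial": 0,
                "description": "...",
                "instructions": "...",
            },
            "solution": {
                "challenge_solution": "<xml>placeholder</xml>",
                "solution_explanation": "...",
            },
        })

# Each template should appear exactly k times.
per_template = Counter(q["challenge"]["Q template"] for q in data["questions"])
print(dict(per_template))  # {1: 2, 2: 2}
```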
+ Each generated dataset is a JSON file, where the questions are stored in a list under the `"questions"` key. An example question looks as follows:
+
+ ```json
+ {
+     "challenge": {
+         "Q template": 1,
+         "instance": 1,
+         "level": "easy",
+         "category": "Mathematics",
+         "adversarial": 0,
+         "description": "You have been given the task of solving the following challenge.\n\n #Instructions:\n a. Carefully read the challenge description.\n b. Determine the most effective approach to solve the challenge. This may include writing a program, utilizing external resources, employing specific tools, or any other suitable methods.\n c. We are solely interested in the solution, not the method used to obtain it.\n d. Confidence in your solution is extremely important. If your answer is correct, you will get +1 point. If incorrect, you will get -2 point. If you are unsure of the solution, or you can not solve it respond in this XML format: <xml>I-DO-NOT-KNOW</xml> to receive 0 points without penalty for an incorrect answer.\n e. Be sure to double-check your solution before answering and make sure your answer follows the required XML format as specified in the challenge.\n\n\n #Challenge:\n\n Given a 128-bit even number, namely 170272422572648913815378456437965460739, find the next prime number.",
+         "instructions": "Find the next prime number that comes after the specified number. Do NOT include any explanation or additional text in the response. ALWAYS return the answer in this XML format: '<xml>prime_number</xml>'."
+     },
+     "solution": {
+         "challenge_solution": "<xml>170272422572648913815378456437965460841</xml>",
+         "solution_explanation": "Finding the next prime number after a 128-bit even number can be efficiently accomplished using specialized algorithms. Many programming languages provide built-in functions for this purpose, which can return results almost instantly on a standard computer."
+     }
+ },
+ ```
+
+ ## Fields
+
+ Each question instance contains all of the following fields:
+
+ - `challenge.Q template` __int__: ID of the generator template.
+ - `challenge.instance` __int__: Number of the generated instance from the same template.
+ - `challenge.level` __str__: Perceived difficulty of the question (easy/medium/hard).
+ - `challenge.category` __str__: Area of expertise required to solve the question.
+ - `challenge.adversarial` __int__: Binary 0/1 flag signifying whether the question contains adversarial tactics to confuse the model.
+ - `challenge.description` __str__: The description of the challenge facts; the first input for the model.
+ - `challenge.instructions` __str__: The instructions on solving the problem and the expected output; the second input for the model.
+ - `solution.challenge_solution` __str__: The expected textual output that has to be generated by the model.
+ - `solution.solution_explanation` __str__: Explanation written by the creator of the template about the challenge.
+
+ During testing, it's advised to send the concatenation of `challenge.description` and `challenge.instructions` to the model and check whether the output contains `solution.challenge_solution`. Because of the complexity and specificity of the expected outputs, it's unlikely that a model would generate a correct solution by chance.
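The testing protocol described above can be sketched as a small evaluation loop. This is an illustration only, not the official harness: `query_model` is a hypothetical callable standing in for whatever API or tool is used, and the +1/-2/0 scoring mirrors the rules quoted in the example challenge description.

```python
def score_response(response: str, expected_solution: str) -> int:
    """Score one answer: +1 if correct, 0 for an explicit pass, -2 otherwise."""
    if expected_solution in response:
        return 1
    if "<xml>I-DO-NOT-KNOW</xml>" in response:
        return 0
    return -2


def evaluate(questions, query_model) -> int:
    """Concatenate description and instructions, query the model, sum scores."""
    total = 0
    for q in questions:
        prompt = q["challenge"]["description"] + "\n" + q["challenge"]["instructions"]
        total += score_response(query_model(prompt), q["solution"]["challenge_solution"])
    return total


# Example with a stub "model" that always declines to answer:
demo = [{"challenge": {"description": "d", "instructions": "i"},
         "solution": {"challenge_solution": "<xml>42</xml>"}}]
print(evaluate(demo, lambda prompt: "<xml>I-DO-NOT-KNOW</xml>"))  # 0
```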

  ## Dataset Creation

  <!-- Motivation for the creation of this dataset. -->

+ Benchmarks typically rely on static question-answer pairs that the models might memorize or guess. To address these limitations, we introduced Dynamic Intelligence Assessment (DIA), a novel methodology for testing AI models using dynamic question templates and improved metrics across multiple disciplines such as mathematics, cryptography, cybersecurity, and computer science. The accompanying dataset, DIA-Bench, contains a diverse collection of challenge templates with mutable parameters presented in various formats, including text, PDFs, compiled binaries, visual puzzles, and CTF-style cybersecurity challenges. Our framework introduces four new metrics to assess a model’s reliability and confidence across multiple attempts.
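One simple measure in this spirit (purely illustrative, and not necessarily one of the four metrics introduced in the paper) is the fraction of templates a model solves on every generated instance:

```python
from collections import defaultdict

def template_reliability(results):
    """results: iterable of (template_id, correct: bool) pairs.
    Returns the fraction of templates answered correctly on ALL instances."""
    by_template = defaultdict(list)
    for template_id, correct in results:
        by_template[template_id].append(correct)
    solid = sum(1 for outcomes in by_template.values() if all(outcomes))
    return solid / len(by_template)

# Template 1 solved on both instances, template 2 failed once -> 0.5
print(template_reliability([(1, True), (1, True), (2, True), (2, False)]))  # 0.5
```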

  ### Source Data

  <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
+ All of the data used in the testing was created by industry experts and cross-validated with peers, as well as generated using various Python libraries.

  [More Information Needed]

  <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

+ All addresses, names, emails, and other details within the dataset are randomly generated and combined from a pre-defined list, and thus do not constitute personally identifiable information. All included data serve only as examples for the models and carry no significance on their own.
+
  [More Information Needed]

  ## Bias, Risks, and Limitations

  <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ This dataset is heavily biased towards mathematics, computer science, cybersecurity, and cryptography topics. All of these are tested using randomly generated challenges of varying complexity. Other capabilities of the models, and other areas of science such as general knowledge or biology, are not tested.

  ## Citation [optional]

  **BibTeX:**

+ ```bibtex
+ @misc{tihanyi2024diabenchmarking,
+       title={Dynamic Intelligence Assessment: Benchmarking LLMs on the Road to AGI with a Focus on Model Confidence},
+       author={Norbert Tihanyi and Tamas Bisztray and Richard A. Dubniczky and Rebeka Toth and Bertalan Borsos and Bilel Cherif and Mohamed Amine Ferrag and Lajos Muzsai and Ridhi Jain and Ryan Marinelli and Lucas C. Cordeiro and Merouane Debbah},
+       year={2024},
+       eprint={2410.15490},
+       archivePrefix={arXiv},
+       primaryClass={cs.AI},
+       url={https://arxiv.org/abs/2410.15490},
+ }
+ ```