---
license: mit
---

# SciBench

**SciBench** is a novel benchmark of _695_ college-level scientific problems sourced from instructional textbooks. It is designed to evaluate the complex reasoning capabilities, strong domain knowledge, and advanced calculation skills of LLMs.

Please refer to our paper for the full description: [SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models](https://arxiv.org/abs/2307.10635).

We developed an innovative **evaluation protocol** for a detailed analysis of reasoning abilities: LLMs are instructed to self-identify and categorize their errors within a predefined set of capabilities. This offers a fine-grained understanding of where the models fall short.
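
To make the protocol concrete, the sketch below tallies the capability labels a model assigns to its own errors. The record structure and the category names here are hypothetical placeholders, not the exact taxonomy from the paper:

```
from collections import Counter

# Hypothetical self-evaluation records: each entry holds the model's own
# verdict on one problem and, if it failed, the capability it blames.
self_evaluations = [
    {"problem": "atkins_1", "correct": False, "error_category": "calculation"},
    {"problem": "atkins_2", "correct": True, "error_category": None},
    {"problem": "chemmc_5", "correct": False, "error_category": "logical reasoning"},
]

# Count how often each predefined capability is cited as the cause of an error.
error_counts = Counter(
    record["error_category"] for record in self_evaluations if not record["correct"]
)
for category, count in error_counts.most_common():
    print("{}: {}".format(category, count))
```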

## Data

Each JSON file corresponds to one textbook (the textbooks are fully described in the paper) and contains a list of problem dictionaries. A file can be loaded with the following script:

```
import json

# Each textbook has its own JSON file under ./data, e.g. ./data/atkins.json.
subject = 'atkins'
with open("./data/{}.json".format(subject), encoding='utf-8') as json_file:
    problems = json.load(json_file)  # a list of problem dictionaries
```
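
Each entry in `problems` is a dictionary describing one problem. The snippet below is a minimal sketch of how to inspect the records; the field names `problem_text`, `answer_number`, and `unit` are assumptions for illustration, so check the actual keys in your copy of the data:

```
# Print the actual schema instead of guessing it.
print(sorted(problems[0].keys()))

# Example access, assuming the hypothetical field names above exist.
for problem in problems[:3]:
    print(problem.get("problem_text"))
    print(problem.get("answer_number"), problem.get("unit"))
```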