---
license: apache-2.0
---

Artificial Analysis Long Context Reasoning (AA-LCR) Dataset

AA-LCR includes 100 hard text-based questions that require reasoning across multiple real-world documents, with each document set averaging ~100k input tokens. Questions are designed such that answers cannot be directly retrieved from documents and must instead be reasoned from multiple information sources.

Dataset Development

AA-LCR was created through a rigorous multi-phase process involving several members of the Artificial Analysis research team and more than a dozen undergraduate students who were engaged on a short-term contract basis to write and/or validate questions.

Document Curation: We selected diverse document sets (company reports, government consultations, legal documents, academic papers) averaging ~100,000 tokens each, representing real materials knowledge workers analyze.

Question Creation: Undergraduate students from various disciplines developed questions using a dataset development dashboard that gave them access to non-frontier test models (GPT-4o-mini, Llama-3.1-70B, Gemini 1.5 Flash) for validating question difficulty. These models were chosen to give creators a sense of AI capabilities without access to frontier models, preventing adversarial selection against particular frontier models. Creators were instructed to develop practical questions requiring multi-document reasoning, and to ensure that the questions were hard enough that these models failed to answer them correctly.

Human Validation: Every question was verified through human testing:

  • Evaluators answered questions using the same document sets provided to AI models
  • Human performance revealed the benchmark's challenging nature: individual evaluators achieved modest accuracy rates, typically answering 40-60% of questions correctly on the first attempt
  • However, when presented with correct answers, evaluators showed high agreement, confirming question validity and demonstrating that, while difficult, the questions had clear, defensible answers
  • Questions failing verification were revised or discarded
  • Every question in AA-LCR was answered correctly by at least one human tester, ensuring all questions have verified solutions

This approach validates that AA-LCR tests genuine reasoning capabilities rather than obscure knowledge, while acknowledging the inherent difficulty of long context reasoning tasks even for human experts.

Technical Details

AA-LCR comprises 100 questions across 7 types of text-only documents (Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials and Survey Reports). Multiple independent documents, forming a Document Set with a total length of ~100k tokens, are passed as context for each question. For instance, the Company Documents category includes separate document sets containing 2023 and 2024 company reports, respectively.

Each question requires using the Document Set and applying general and mathematical reasoning.

| Parent Category | Total Questions | Total Document Sets | Total Documents | Total Tokens | Average Tokens Per Document Set |
|---|---|---|---|---|---|
| Company Documents | 63 | 16 | 92 | 1,476,239 | 92,265 |
| Industry Reports | 8 | 4 | 18 | 410,698 | 102,675 |
| Government Consultations | 11 | 3 | 60 | 325,254 | 108,418 |
| Academia | 5 | 2 | 14 | 223,776 | 111,888 |
| Legal | 6 | 2 | 23 | 233,050 | 116,525 |
| Marketing | 6 | 2 | 16 | 217,694 | 108,847 |
| Survey Reports | 1 | 1 | 11 | 93,046 | 93,046 |
| Full Dataset | 100 | 30 | 234 | 2,979,757 | 99,325 |
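
Since each document set must fit in a single context window, it can be useful to assemble the documents into one string and check its length. The sketch below is illustrative only: the separator format and tokenizer (tiktoken's cl100k_base) are assumptions, not part of AA-LCR, and the official token counts above may have been computed with a different tokenizer.

```python
# Illustrative sketch: concatenate a document set into one context string and
# estimate its token count. Separator format and tokenizer are assumptions.
import tiktoken

def build_context(documents: list[str]) -> str:
    """Join the documents of one document set with simple headers."""
    parts = [f"--- Document {i} ---\n{doc}" for i, doc in enumerate(documents, start=1)]
    return "\n\n".join(parts)

def count_tokens(text: str) -> int:
    """Estimate the token count of an assembled context."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

# Each AA-LCR document set should come out at roughly ~100k tokens:
# context = build_context(doc_set)
# print(count_tokens(context))
```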

Sample Question:

```
Question: For the company and quarter where the company reported a 13.5% decline
on the prior quarter's operating income, what was their adjusted EBITDA?
List the company name and adjusted EBITDA.

Answer: Equinix, $901 million
```

Examples of other types of questions include:

  • Financial Analysis and Comparative Metrics: Extract financial data and calculate performance metrics
  • Legal and Regulatory Interpretation: Identify cases/policies under exclusion rules, interpret outcomes and applicability and surface cited sections/definitions
  • Multi-Document Information Synthesis: Find and connect information scattered across multiple documents to identify themes and correlate data points
  • Temporal and Conditional Logic Analysis: Track time-series trends, implement conditional decision rules, and determine threshold-based alerts or actions
  • Research and Classification: Analyze patterns, classify and identify relevant documents to recall specific information

Scoring Approach

We use an LLM-based equality checker to evaluate responses:

```
Assess whether the following CANDIDATE ANSWER is CORRECT or INCORRECT. For the CANDIDATE ANSWER to be correct, it must be consistent with the OFFICIAL ANSWER.

The question, for reference only: {question}
The OFFICIAL ANSWER: {official_answer}
CANDIDATE ANSWER TO ASSESS: {candidate_answer}

Reply only with CORRECT or INCORRECT.
```

Qwen3 235B A22B 2507 Non-reasoning is used as the equality checker model.
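
As a rough sketch, assuming an OpenAI-compatible chat completions endpoint serving the checker model, the grading step could look like the following (the client configuration and model identifier are placeholders, not the official evaluation harness):

```python
# Illustrative sketch of the LLM-based equality check, assuming an
# OpenAI-compatible chat completions endpoint serving the checker model.
# The endpoint, API key handling, and model identifier are placeholders.
from openai import OpenAI

GRADER_PROMPT = """Assess whether the following CANDIDATE ANSWER is CORRECT or INCORRECT. For the CANDIDATE ANSWER to be correct, it must be consistent with the OFFICIAL ANSWER.

The question, for reference only: {question}
The OFFICIAL ANSWER: {official_answer}
CANDIDATE ANSWER TO ASSESS: {candidate_answer}

Reply only with CORRECT or INCORRECT."""

def is_correct(client: OpenAI, model: str, question: str,
               official_answer: str, candidate_answer: str) -> bool:
    """Ask the checker model for a CORRECT/INCORRECT verdict."""
    prompt = GRADER_PROMPT.format(
        question=question,
        official_answer=official_answer,
        candidate_answer=candidate_answer,
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    verdict = response.choices[0].message.content.strip().upper()
    return verdict.startswith("CORRECT")

# Example usage (endpoint and model identifier are placeholders):
# client = OpenAI(base_url="https://example.com/v1", api_key="...")
# is_correct(client, "qwen3-235b-a22b-2507", question, official, candidate)
```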

Access and Citation

The AA-LCR dataset is available at https://huggingface.co/datasets/ArtificialAnalysis/AA-LCR.
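
A minimal sketch for loading the dataset with the Hugging Face `datasets` library is shown below; the split and column names are not specified here, so inspect the loaded dataset (or the dataset viewer) for the exact schema.

```python
# Minimal sketch: load AA-LCR from the Hugging Face Hub.
# Split and column names are assumptions; inspect `ds` for the actual schema.
from datasets import load_dataset

ds = load_dataset("ArtificialAnalysis/AA-LCR")
print(ds)  # shows the available splits and features
```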

If you use AA-LCR in your research, please cite:

```bibtex
@dataset{artificialanalysis2025lcr,
  title={Artificial Analysis Long Context Reasoning Benchmark (LCR)},
  author={Artificial Analysis Team},
  year={2025},
  publisher={Artificial Analysis, Inc.}
}
```