---
license: mit
language:
- en
size_categories:
- 1K<n<10K
---
# FOReCAst Dataset
FOReCAst (Future Outcome Reasoning and Confidence Assessment) is a benchmark dataset for evaluating language models on reasoning about uncertain future events and expressing calibrated confidence in their predictions. It is designed to support research in probabilistic language understanding, plausibility estimation, and long-term forecasting with natural language inputs.
This is the first release of the FOReCAst dataset. It includes natural language forecasting questions, structured outcomes, and calibrated confidence signals, currently derived from publicly accessible data on Metaculus. Future versions will incorporate additional data sources to expand topic diversity and challenge coverage.
## Overview
FOReCAst is a benchmark dataset developed to evaluate language models on their ability to reason about uncertain future events and express calibrated confidence in natural language. While most existing NLP datasets are centered around static facts, closed-world assumptions, or deterministic labels, FOReCAst focuses on probabilistic reasoning under real-world temporal uncertainty.
## Motivation
Forecasting future outcomes is inherently probabilistic and context-dependent, yet language models are rarely evaluated on such tasks. FOReCAst addresses this gap by offering structured, temporally grounded questions where the correct answer was initially unknown and only became available after real-world resolution. This enables the study of predictive reasoning, plausibility estimation, and model calibration in a realistic setting.
## Dataset Design and Scope
This first version of FOReCAst comprises natural language forecasting questions, resolved outcomes, and normalized confidence scores, sourced from public questions on the Metaculus platform. Each entry has been cleaned, reformatted, and remapped to one of three reasoning types:
- Boolean question: binary yes/no forecasts
- Quantity estimation: numeric value predictions
- Timeframe prediction: resolution dates or deadlines
The dataset supports both prompt-based evaluation and supervised training, and is organized into standard `train`, `dev`, and `test` splits to facilitate benchmarking and reproducibility. Confidence scores are normalized to the `[0, 1]` range to enable studies of calibration and uncertainty expression.
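Because confidence scores are normalized to [0, 1], standard calibration metrics apply directly to Boolean questions. As an illustrative sketch (not part of any released tooling), the Brier score and a simple binned expected calibration error can be computed as:

```python
def brier_score(confidences, outcomes):
    """Mean squared error between predicted probabilities in [0, 1]
    and binary outcomes encoded as 0/1."""
    assert len(confidences) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(confidences, outcomes)) / len(confidences)


def expected_calibration_error(confidences, outcomes, n_bins=10):
    """Bin predictions by confidence and compare each bin's mean
    confidence to its empirical accuracy, weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(confidences, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)
        avg_acc = sum(y for _, y in b) / len(b)
        ece += (len(b) / total) * abs(avg_conf - avg_acc)
    return ece
```

A lower Brier score and a lower calibration error both indicate better-calibrated forecasts; perfectly calibrated, perfectly accurate predictions score 0 on both.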
## Evaluation Focus
FOReCAst is intended to support a range of research directions, including:
- Forecasting and plausibility reasoning in open-ended questions
- Prediction calibration and uncertainty quantification
- Temporal reasoning and evolving world knowledge modeling
- Comparison of LLM forecasts with human forecasting baselines
Although the initial data source is Metaculus, FOReCAst is an independent benchmark designed to generalize across platforms and domains. Future versions will expand coverage to additional question sources (e.g., science, finance, geopolitics), incorporate new formats (e.g., multi-hop or multi-turn forecasting), and support temporal dynamics over longer horizons.
## Intended Use
FOReCAst is released for non-commercial, academic research purposes only. It is designed to evaluate and improve the reasoning and calibration abilities of language models—not to serve as a forecasting system itself. This dataset is not intended for real-time prediction, operational use, or deployment in high-stakes decision-making systems. Potential users should carefully assess model outputs and ensure that limitations and uncertainties are clearly communicated.
## Human Baseline Alignment
The structure of FOReCAst allows for comparison between model-generated forecasts and human forecaster behavior. For instance, questions with existing consensus predictions from Metaculus can serve as reference points when analyzing alignment, divergence, or calibration mismatch between models and expert or crowd forecasters.
## Forward Vision
This is version 1 of an evolving benchmark. FOReCAst will be extended with:
- Broader domain coverage through additional public data sources
- More diverse reasoning tasks and question structures
- Context-aware forecasting via temporal update chains
- Evaluation protocols for dynamic prediction and revision over time
The long-term goal is to create a comprehensive, extensible benchmark for evaluating the reasoning, uncertainty modeling, and forecasting capabilities of modern language models.
## Data Format
Each line in the dataset is a JSON object with the following fields:
- id: A unique anonymized identifier for the question.
- question: A natural language forecasting question.
- type: One of the following task types:
  - `"Boolean question"`: yes/no prediction
  - `"quantity estimation"`: numeric value prediction
  - `"timeframe prediction"`: date-based or deadline prediction
- resolution: The final resolved outcome, in a format consistent with the question type.
- resolution_time: The date the question was resolved (`YYYY-MM-DD`).
- created_time: The date the question was posted (`YYYY-MM-DD`).
- confidence: A floating-point number between 0 and 1 indicating forecast confidence.
All fields have been normalized to support uniform modeling and evaluation.
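As a minimal sketch, the JSON-lines records can be read with the Python standard library; the file path below is hypothetical, and the checks simply mirror the field descriptions above:

```python
import json

VALID_TYPES = {"Boolean question", "quantity estimation", "timeframe prediction"}


def load_forecast_split(path):
    """Read one JSONL split file into a list of dicts, with light
    validation of the documented fields."""
    examples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            ex = json.loads(line)
            assert ex["type"] in VALID_TYPES, f"unexpected type: {ex['type']}"
            assert 0.0 <= ex["confidence"] <= 1.0, "confidence outside [0, 1]"
            examples.append(ex)
    return examples


# Hypothetical usage; actual split file names may differ:
# train = load_forecast_split("forecast/train.jsonl")
```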
## Citation
If you use this dataset in your work, please cite the following paper:
```bibtex
@article{yuan2025future,
  title={FOReCAst: The Future Outcome Reasoning and Confidence Assessment Benchmark},
  author={Yuan, Zhangdie and Ding, Zifeng and Vlachos, Andreas},
  journal={arXiv preprint arXiv:2502.19676},
  year={2025}
}
```
## Ethical Considerations and Limitations
We provide this dataset with careful attention to responsible use, but we acknowledge several potential areas of ethical concern:
### Representational Bias
Questions are crowd-sourced from a forecasting community. As such, they may reflect regional, cultural, or political biases in topic selection, phrasing, and underlying assumptions. Researchers should consider this when analyzing or training on the data.
### Annotation Uncertainty
Even resolved questions may contain some ambiguity in how outcomes are defined or interpreted. The source platform applies community-based resolution criteria, which may differ from formal or scientific definitions.
### Model Misuse Risk
Although this dataset is designed for research, models trained on it could be misused to create seemingly confident forecasts about complex real-world events without context or accountability. We discourage the use of any resulting models for high-stakes decision-making without appropriate calibration and interpretability safeguards.
### Dual Use and Overconfidence
Forecasting models carry inherent dual-use potential. A well-calibrated system can aid in education or policy research, but a poorly understood or overly confident model may lead to misinformation or undue influence. Users must critically assess outputs and communicate uncertainty transparently.
### Privacy and Sensitivity
The dataset does not contain personally identifiable information. All questions and resolutions are publicly posted. Nonetheless, some topics may refer to politically sensitive or socially charged events. Users should take care to avoid amplifying harm when analyzing or presenting outputs.
Acknowledgment: This dataset includes publicly available data adapted from Metaculus, used for academic research purposes in accordance with their Terms of Use. We do not claim ownership of the original content.
## Data Governance and Retraction
As the original data comes from a live forecasting platform, changes (e.g., question edits or deletions) may occur after data collection. We encourage users to verify against the current Metaculus platform if needed and to respect any requests for data takedown or correction.
For questions, feedback, or collaboration inquiries, please contact the authors.