---
license: cdla-permissive-2.0
task_categories:
- text-generation
- text2text-generation
- other
tags:
- code
- fstar
- popai
pretty_name: PoPAI-FStarDataSet-V2
size_categories:
- 10K<n<100K
---

# Data Format

Each example in this dataset is a JSON object with the following fields:

```
{
    "file_name": ...,
    "name": ...,
    "original_source_type": ...,
    "source_type": ...,
    "source_definition": ...,
    "source": ...,
    "source_range": ...,
    "file_context": ...,
    "dependencies": ...,
    "opens_and_abbrevs": ...,
    "vconfig": ...,
    "interleaved": ...,
    "verbose_type": ...,
    "effect": ...,
    "effect_flags": ...,
    "mutual_with": ...,
    "ideal_premises": ...,
    "proof_features": ...,
    "is_simple_lemma": ...,
    "is_div": ...,
    "is_proof": ...,
    "is_simply_typed": ...,
    "is_type": ...,
    "partial_definition": ...,
    "completed_definiton": ...,
    "isa_cross_project_example": ...
}
```

# Usage

To use this dataset with [`datasets`](https://pypi.org/project/datasets/):

```python
from datasets import load_dataset

data = load_dataset("microsoft/FStarDataSet-V2")
train_data = data["train"]
eval_data = data["validation"]
test_data = data["test"]

intra_project_test = test_data.filter(lambda x: x["isa_cross_project_example"] == False)
cross_project_test = test_data.filter(lambda x: x["isa_cross_project_example"] == True)
```

## Input

The primary input for generating an F* definition is **`source_type`**. All other information in an example may be used directly, or to derive an input, except **`source_definition`**, **`ideal_premises`**, and **`completed_definiton`**.

## Output

The primary output is **`source_definition`**, the ground-truth definition, which can be evaluated with the [proof checker](#evaluation-on-this-dataset). The **`completed_definiton`** field may be used as ground truth when a model is used in a text-completion setting (though the evaluator does not support evaluation in this setting). In addition, **`ideal_premises`** may be used for evaluating premise-selection models.
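As an illustrative sketch (not part of the official evaluator), a premise-selection model that returns a ranked list of candidate premises could be scored against **`ideal_premises`** with a recall@k metric. This assumes `ideal_premises` is a list of premise names; the premise names below are hypothetical:

```python
def premise_recall_at_k(predicted, ideal, k):
    """Fraction of ground-truth premises found in the top-k predictions."""
    if not ideal:
        return 1.0  # nothing to retrieve
    top_k = set(predicted[:k])
    return sum(1 for p in ideal if p in top_k) / len(ideal)

# Toy example with hypothetical premise names:
predicted = ["FStar.List.Tot.length", "FStar.Seq.append", "Prims.op_Addition"]
ideal = ["FStar.Seq.append", "FStar.Seq.length"]
print(premise_recall_at_k(predicted, ideal, k=2))  # 0.5
```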
# Evaluation on this dataset

Generated F* definitions should be evaluated with the proof-checker tool from [https://github.com/FStarLang/fstar_dataset/releases/tag/eval-v2.0](https://github.com/FStarLang/fstar_dataset/releases/tag/eval-v2.0). Download the source code and the `helpers.zip` file from the release.

## Troubleshooting

The binaries attached to the evaluator (i.e., `fstar.exe` and `z3`) were built on **`Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-189-generic x86_64)`** with **`gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2)`** and **`OCaml 4.12.0`**. If any of the binaries does not work properly, build F* from [this commit (10183ea187da8e8c426b799df6c825e24c0767d3)](https://github.com/FStarLang/FStar/commit/10183ea187da8e8c426b799df6c825e24c0767d3) of the [F* repository](https://github.com/FStarLang/FStar), following the [installation guide](https://github.com/FStarLang/FStar/blob/master/INSTALL.md).

# Data Source

In addition to the eight projects in `microsoft/FStarDataSet`, this version includes data from four more projects:

1. [Starmada](https://github.com/microsoft/Armada): a framework for proofs by stepwise refinement of concurrent programs in a weak memory model. Starmada is an experimental version of Armada implemented in F⋆, relying on various advanced features of F⋆’s dependent type system for more generic and abstract proofs.
2. [Zeta](https://github.com/project-everest/zeta): a high-performance, concurrent monitor for stateful services, proven correct in F⋆ and its Steel concurrent separation logic.
3. [Dice-star](https://github.com/verified-HRoT/dice-star): a verified implementation of the DICE measured-boot protocol for embedded devices.
4. 
[Noise-star](https://github.com/Inria-Prosecco/noise-star): a verified compiler for implementations of Noise protocols, a family of key-exchange protocols.

# Limitations

**TBD**

# Citation

```
@inproceedings{chakraborty2024towards,
  title={Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming},
  author={Chakraborty, Saikat and Ebner, Gabriel and Bhat, Siddharth and Fakhoury, Sarah and Fatima, Sakina and Lahiri, Shuvendu and Swamy, Nikhil},
  booktitle={Proceedings of the IEEE/ACM 47th International Conference on Software Engineering (To Appear)},
  pages={1--12},
  year={2025}
}
```