diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000000000000000000000000000000000000..818d649bf21cdef29b21f885c8f770f9baa1714e
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1,31 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.npy filter=lfs diff=lfs merge=lfs -text
+*.npz filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pickle filter=lfs diff=lfs merge=lfs -text
+*.pkl filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.wasm filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zst filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000000000000000000000000000000000000..d7620b2313b4ef33e027868fc04383e5a7b9f414
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,14 @@
+__pycache__/*
+*/__pycache__/*
+**/__pycache__/*
+scene_data_*.npy
+scripts/logs/*
+logs/*
+wandb/*
+data/*
+
+*~*
+*#*
+*sweep_logs*
+*.ipynb_checkpoints*
+*.egg-info*
\ No newline at end of file
diff --git a/README.md b/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..9ab2a2bc86e7254487798f896c9f45899f3ea656
--- /dev/null
+++ b/README.md
@@ -0,0 +1,93 @@
+---
+title: "RAP: Risk-Aware Prediction"
+emoji: 🚙
+colorFrom: red
+colorTo: grey
+sdk: gradio
+sdk_version: 3.7
+app_file: app.py
+pinned: false
+language:
+ - Python
+thumbnail: "url to a thumbnail used in social sharing"
+tags:
+- Risk Measures
+- Forecasting
+- Safety
+- Human-Robot Interaction
+license: cc-by-nc-4.0
+
+---
+
+# License statement
+
+The code is provided under an Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Under this license, the code is provided royalty-free for non-commercial purposes only. The code may be covered by patents; if you want to use the code for commercial purposes, please contact us for a different license.
+
+# RAP: Risk-Aware Prediction
+
+This is the official code for [RAP: Risk-Aware Prediction for Robust Planning](https://arxiv.org/abs/2210.01368). You can test the results in [our huggingface demo](https://huggingface.co/spaces/TRI-ML/risk_biased_prediction) and see additional experiments on the [paper website](https://sites.google.com/view/corl-risk/).
+
+![A planner reacts to low-probability events if they are dangerous; biasing the predictions to better represent these events helps the planner to be cautious.](image/illustration.png)
+
+We define and train a trajectory forecasting model and bias its predictions towards risk, so that it helps a planner estimate risk by producing the relevant pessimistic trajectory forecasts to consider.
+
+## Datasets
+This repository uses two datasets:
+ - A didactic simulated environment with a single vehicle at constant velocity and a single pedestrian.
+   Two pedestrian behaviors are implemented: fast and slow. At each step, pedestrians might walk at their favored speed or at the other speed.
+   This produces a distribution of pedestrian trajectories with two modes. The dataset is automatically generated and used. You can change the parameters of the data generation in "config/learning_config.py".
+ - The Waymo Open Motion Dataset (WOMD) with complex real scenes.
+
+
+## Forecasting model
+A conditional variational auto-encoder (CVAE) model is used as the base pedestrian trajectory predictor. Its latent space is quantized or Gaussian, depending on the parameter that you set in the config. It uses either multi-head attention or a modified version of context gating to account for interactions. Depending on the parameters, the trajectory encoder and decoder can be set to MLP, LSTM, or maskedLSTM.
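+
+The snippet below is a minimal, self-contained sketch of the two latent-space options named above: a Gaussian latent sampled with the reparameterization trick, and a quantized latent looked up in a learned codebook. It is an illustration, not this repository's code; all names, shapes, and the PyTorch dependency are assumptions.
+
+```python
+# Illustrative sketch only, not this repository's implementation.
+import torch
+
+def sample_gaussian_latent(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
+    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)
+    eps = torch.randn_like(mu)
+    return mu + torch.exp(0.5 * logvar) * eps
+
+def quantize_latent(z_e: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
+    # VQ-style quantization: replace each encoding by its nearest code vector
+    distances = torch.cdist(z_e, codebook)  # (batch, num_codes)
+    indices = distances.argmin(dim=-1)      # index of the nearest code
+    return codebook[indices]
+
+# Hypothetical shapes: a batch of 8 encodings with a 16-dimensional latent
+z_e = torch.randn(8, 16)
+codebook = torch.randn(32, 16)  # 32 learned code vectors
+z_gaussian = sample_gaussian_latent(z_e, torch.zeros_like(z_e))
+z_quantized = quantize_latent(z_e, codebook)
+```
+
+In a CVAE, the Gaussian variant is typically regularized with a KL term, while a quantized variant is typically trained with a VQ-VAE-style commitment loss.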
+
+# Usage
+
+## Installation
+
+- (Set up a virtual environment with python>3.7)
+- Install the package with `pip install -e .`
+
+## Setting up the data
+
+### Didactic simulation
+ - The dataset is automatically generated and used. You can change the parameters of the data generation in "config/learning_config.py"
+
+### WOMD
+ - [Download the Waymo Open Motion Dataset (WOMD)](https://waymo.com/open/)
+ - Pre-process it as follows:
+   - Sample set: `python scripts/scripts_utils/generate_dataset_waymo.py <path>/scenario/validation <path>/interactive_veh_type/sample --num_parallel=<16> --debug_size=<1000>`
+   - Training set: `python scripts/interaction_utils/generate_dataset_waymo.py <path>/scenario/training <path>/interactive_veh_type/training --num_parallel=<16>`
+   - Validation set: `python scripts/interaction_utils/generate_dataset_waymo.py <path>/scenario/validation_interactive <path>/interactive_veh_type/validation --num_parallel=<16>`
+
+   Replace the arguments:
+   - `<path>` with the path where you downloaded WOMD
+   - `<16>` with the number of cores you want to use
+   - `<1000>` with the number of scenes to process for the sample set (some scenes are filtered out, so the resulting number of pre-processed scenes might be about a third of the input number)
+ - Set up the path to the dataset in "risk_biased/config/paths.py"
+
+## Configuration and training
+
+- Set up the output log path in "risk_biased/config/paths.py"
+- You might need to log in to wandb with `wandb login`
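+
+If you prefer to log in from Python, the following is a minimal sketch using wandb's standard Python API; the project name and config values are hypothetical placeholders, not this repository's actual settings:
+
+```python
+# Minimal wandb setup sketch; project name and config are placeholders.
+import wandb
+
+wandb.login()  # prompts for (or reuses) your API key
+run = wandb.init(
+    project="risk_biased_example",   # hypothetical project name
+    config={"learning_rate": 1e-4},  # hypothetical hyperparameter
+)
+wandb.log({"loss": 0.0})  # log a dummy metric to check the setup
+run.finish()
+```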