diff --git a/elia/LICENSE b/LICENSE
similarity index 100%
rename from elia/LICENSE
rename to LICENSE
diff --git a/README.md b/README.md
index 8511de7bfbbc4a731c11381bbf702111eea7d132..869de8afb5569dca5c730028fe17228f54198bef 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,222 @@
----
-title: Elia
-emoji: 📈
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.33.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
+Welcome to the official repository for the method presented in
+"LAVT: Language-Aware Vision Transformer for Referring Image Segmentation."
+
+
+![Pipeline Image](pipeline.jpg)
+
+Code in this repository is written using [PyTorch](https://pytorch.org/) and is organized in the following way (assuming the working directory is the root directory of this repository):
+* `./lib` contains files implementing the main network.
+* Inside `./lib`, `_utils.py` defines the highest-level model, which incorporates the backbone network
+defined in `backbone.py` and the simple mask decoder defined in `mask_predictor.py`.
+`segmentation.py` provides the model interface and initialization functions.
+* `./bert` contains files migrated from [Hugging Face Transformers v3.0.2](https://huggingface.co/transformers/v3.0.2/quicktour.html),
+which implement the BERT language model.
+We used Transformers v3.0.2 during development, but that version has a bug that surfaces when `DistributedDataParallel` is used.
+We therefore maintain a copy of the relevant source files in this repository;
+this way, the bug is fixed and the code in this repository is self-contained.
+* `./train.py` is invoked to train the model.
+* `./test.py` is invoked to run inference on the evaluation subsets after training.
+* `./refer` contains data pre-processing code and is also where data should be placed, including the images and all annotations.
+It is cloned from [refer](https://github.com/lichengunc/refer).
+* `./data/dataset_refer_bert.py` is where the dataset class is defined.
+* `./utils.py` defines functions for tracking training statistics and for setting up
+`DistributedDataParallel`.
+
+
+## Updates
+**June 21st, 2022**. Uploaded the training logs and trained
+model weights of lavt_one.
+
+**June 9th, 2022**.
+Added a more efficient implementation of LAVT.
+* To train this new model, specify `--model` as `lavt_one`
+(and `lavt` is still valid for specifying the old model).
+The rest of the configuration stays unchanged.
+* The difference between this version and the previous one
+is that the language model has been moved inside the overall model,
+so that `DistributedDataParallel` needs to be applied only once.
+Applying it twice (to the standalone language model and to the main branch),
+as done in the old implementation, led to low GPU utilization,
+which prevented training speed from scaling up with more GPUs.
+We recommend training this model on 8 GPUs
+(and, as before, with batch size 32).
+A toy sketch of this structural change is shown right after this list.
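+
+The sketch below uses toy modules (illustrative stand-ins, not the repository's actual classes) and is only meant to show why a single `DistributedDataParallel` wrapper now suffices when the language model lives inside the overall model:
+```python
+import torch
+import torch.distributed as dist
+import torch.nn as nn
+from torch.nn.parallel import DistributedDataParallel as DDP
+
+
+class ToySegModel(nn.Module):
+    """Stand-in for lavt_one: the language model is a submodule of the whole model."""
+    def __init__(self):
+        super().__init__()
+        self.language_model = nn.Embedding(30522, 64)      # stands in for BERT
+        self.image_model = nn.Conv2d(3, 64, 3, padding=1)  # stands in for the Swin backbone
+        self.classifier = nn.Conv2d(64, 1, 1)              # stands in for the mask decoder
+
+    def forward(self, img, tokens):
+        lang = self.language_model(tokens).mean(dim=1)[:, :, None, None]
+        return self.classifier(self.image_model(img) + lang)
+
+
+model = ToySegModel()
+# One DDP wrapper now covers every parameter; the old `lavt` wrapped the vision
+# branch and the standalone BERT model separately, which hurt GPU utilization.
+if dist.is_available() and dist.is_initialized():
+    local_rank = dist.get_rank() % torch.cuda.device_count()
+    model = DDP(model.cuda(), device_ids=[local_rank])
+```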
+
+## Setting Up
+### Preliminaries
+The code has been verified to work with PyTorch v1.7.1 and Python 3.7.
+1. Clone this repository.
+2. Change directory to the root of this repository.
+### Package Dependencies
+1. Create a new Conda environment with Python 3.7 then activate it:
+```shell
+conda create -n lavt python==3.7
+conda activate lavt
+```
+
+2. Install PyTorch v1.7.1 with a CUDA version that works on your cluster/machine (CUDA 10.2 is used in this example):
+```shell
+conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch
+```
+
+3. Install the packages in `requirements.txt` via `pip`:
+```shell
+pip install -r requirements.txt
+```
+
+### Datasets
+1. Follow instructions in the `./refer` directory to set up subdirectories
+and download annotations.
+This directory is a git clone (minus two data files that we do not need)
+of the [refer](https://github.com/lichengunc/refer) public API.
+
+2. Download images from [COCO](https://cocodataset.org/#download).
+Please use the first download link *2014 Train images [83K/13GB]*, and extract
+the downloaded `train2014.zip` file to `./refer/data/images/mscoco/images`.
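+
+For example, assuming `wget` and `unzip` are available (the URL below is the standard COCO download link):
+```shell
+wget http://images.cocodataset.org/zips/train2014.zip
+unzip -q train2014.zip -d ./refer/data/images/mscoco/images
+```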
+
+### The Initialization Weights for Training
+1. Create the `./pretrained_weights` directory where we will be storing the weights.
+```shell
+mkdir ./pretrained_weights
+```
+2. Download [pre-trained classification weights of
+the Swin Transformer](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth),
+and put the `.pth` file in `./pretrained_weights`.
+These weights are required to initialize the model for training.
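+
+For example, assuming `wget` is available:
+```shell
+wget -P ./pretrained_weights https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth
+```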
+
+### Trained Weights of LAVT for Testing
+1. Create the `./checkpoints` directory where we will be storing the weights.
+```shell
+mkdir ./checkpoints
+```
+2. Download LAVT model weights (stored on Google Drive) using the links below and put them in `./checkpoints`.
+
+| [RefCOCO](https://drive.google.com/file/d/13D-OeEOijV8KTC3BkFP-gOJymc6DLwVT/view?usp=sharing) | [RefCOCO+](https://drive.google.com/file/d/1B8Q44ZWsc8Pva2xD_M-KFh7-LgzeH2-2/view?usp=sharing) | [G-Ref (UMD)](https://drive.google.com/file/d/1BjUnPVpALurkGl7RXXvQiAHhA-gQYKvK/view?usp=sharing) | [G-Ref (Google)](https://drive.google.com/file/d/1weiw5UjbPfo3tCBPfB8tu6xFXCUG16yS/view?usp=sharing) |
+|---|---|---|---|
+
+3. Model weights and training logs of the new lavt_one implementation are below.
+
+| RefCOCO | RefCOCO+ | G-Ref (UMD) | G-Ref (Google) |
+|:-----:|:-----:|:-----:|:-----:|
+|[log](https://drive.google.com/file/d/1YIojIHqe3bxxsWOltifa2U9jH67hPHLM/view?usp=sharing) | [weights](https://drive.google.com/file/d/1xFMEXr6AGU97Ypj1yr8oo00uObbeIQvJ/view?usp=sharing)|[log](https://drive.google.com/file/d/1Z34T4gEnWlvcSUQya7txOuM0zdLK7MRT/view?usp=sharing) | [weights](https://drive.google.com/file/d/1HS8ZnGaiPJr-OmoUn4-4LVnVtD_zHY6w/view?usp=sharing)|[log](https://drive.google.com/file/d/14VAgahngOV8NA6noLZCqDoqaUrlW14v8/view?usp=sharing) | [weights](https://drive.google.com/file/d/14g8NzgZn6HzC6tP_bsQuWmh5LnOcovsE/view?usp=sharing)|[log](https://drive.google.com/file/d/1JBXfmlwemWSvs92Rky0TlHcVuuLpt4Da/view?usp=sharing) | [weights](https://drive.google.com/file/d/1IJeahFVLgKxu_BVmWacZs3oUzgTCeWcz/view?usp=sharing)|
+
+* The Prec@K, overall IoU, and mean IoU numbers in the training logs will differ
+from the final results obtained by running `test.py`,
+because during training only one of the multiple annotated expressions
+for each object is randomly selected and evaluated.
+These numbers nevertheless give a good indication of test performance;
+the two should be fairly close.
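+
+For reference, the sketch below shows how these metrics are commonly computed from binary masks; it is a simplified illustration of the metric definitions, not the evaluation code in `test.py`:
+```python
+import torch
+
+
+def compute_metrics(preds, gts, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
+    """preds, gts: iterables of binary (H, W) segmentation masks as torch tensors."""
+    per_sample_ious = []
+    total_inter, total_union = 0, 0
+    for pred, gt in zip(preds, gts):
+        pred, gt = pred.bool(), gt.bool()
+        inter = (pred & gt).sum().item()
+        union = (pred | gt).sum().item()
+        per_sample_ious.append(inter / union if union > 0 else 0.0)
+        total_inter += inter
+        total_union += union
+    mean_iou = 100.0 * sum(per_sample_ious) / len(per_sample_ious)
+    overall_iou = 100.0 * total_inter / total_union  # cumulative intersection over cumulative union
+    prec_at_k = {t: 100.0 * sum(iou > t for iou in per_sample_ious) / len(per_sample_ious)
+                 for t in thresholds}
+    return prec_at_k, overall_iou, mean_iou
+```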
+
+
+## Training
+We use `DistributedDataParallel` from PyTorch.
+The released `lavt` weights were trained using 4 x 32G V100 cards (maximum memory usage on each card was about 26G).
+The released `lavt_one` weights were trained using 8 x 32G V100 cards (maximum memory usage on each card was about 13G).
+The additional cards were used only to accelerate training.
+To run on 4 GPUs (with IDs 0, 1, 2, and 3) on a single node:
+```shell
+mkdir ./models
+
+mkdir ./models/refcoco
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcoco --model_id refcoco --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/refcoco/output
+
+mkdir ./models/refcoco+
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcoco+ --model_id refcoco+ --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/refcoco+/output
+
+mkdir ./models/gref_umd
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcocog --splitBy umd --model_id gref_umd --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/gref_umd/output
+
+mkdir ./models/gref_google
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcocog --splitBy google --model_id gref_google --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/gref_google/output
+```
+* *--model* is a pre-defined model name. Options include `lavt` and `lavt_one`. See [Updates](#updates).
+* *--dataset* is the dataset name. One can choose from `refcoco`, `refcoco+`, and `refcocog`.
+* *--splitBy* needs to be specified if and only if the dataset is G-Ref (which is also called RefCOCOg).
+`umd` identifies the UMD partition and `google` identifies the Google partition.
+* *--model_id* is the model name one should define oneself (*e.g.*, customize it to contain training/model configurations, dataset information, experiment IDs, *etc*.).
+It is used in two ways: the training log will be saved as `./models/[args.model_id]/output`, and the best checkpoint will be saved as `./checkpoints/model_best_[args.model_id].pth`.
+* *--swin_type* specifies the version of the Swin Transformer.
+One can choose from `tiny`, `small`, `base`, and `large`. The default is `base`.
+* *--pretrained_swin_weights* specifies the path to pre-trained Swin Transformer weights used for model initialization.
+* Note that currently we need to manually create the `./models/[args.model_id]` directory via `mkdir` before running `train.py`.
+This is because we use `tee` to redirect `stdout` and `stderr` to `./models/[args.model_id]/output` for logging.
+This is a nuisance and should be resolved in the future, *e.g.*, by using a proper logger or a bash script for initiating training (a minimal helper script is sketched below).
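+
+As a minimal example of such a helper script (a hypothetical `train.sh`, not part of the repository, that simply wraps the same command and flags shown above):
+```shell
+#!/usr/bin/env bash
+# Usage: bash train.sh <dataset> <model_id> [extra train.py arguments...]
+# e.g.:  bash train.sh refcoco refcoco
+#        bash train.sh refcocog gref_umd --splitBy umd
+DATASET=$1
+MODEL_ID=$2
+shift 2
+mkdir -p ./models/${MODEL_ID}
+CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 \
+    train.py --model lavt --dataset ${DATASET} --model_id ${MODEL_ID} \
+    --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base \
+    --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth \
+    --epochs 40 --img_size 480 "$@" 2>&1 | tee ./models/${MODEL_ID}/output
+```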
+
+## Testing
+For RefCOCO/RefCOCO+, run one of
+```shell
+python test.py --model lavt --swin_type base --dataset refcoco --split val --resume ./checkpoints/refcoco.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
+python test.py --model lavt --swin_type base --dataset refcoco+ --split val --resume ./checkpoints/refcoco+.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
+```
+* *--split* is the subset to evaluate; one can choose from `val`, `testA`, and `testB` (a loop over all three is sketched right after this list).
+* *--resume* is the path to the weights of a trained model.
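+
+To evaluate all three RefCOCO subsets in sequence, a simple shell loop over the same command can be used, for example:
+```shell
+for SPLIT in val testA testB; do
+    python test.py --model lavt --swin_type base --dataset refcoco --split ${SPLIT} \
+        --resume ./checkpoints/refcoco.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
+done
+```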
+
+For G-Ref (UMD)/G-Ref (Google), run one of
+```shell
+python test.py --model lavt --swin_type base --dataset refcocog --splitBy umd --split val --resume ./checkpoints/gref_umd.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
+python test.py --model lavt --swin_type base --dataset refcocog --splitBy google --split val --resume ./checkpoints/gref_google.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
+```
+* *--splitBy* specifies the partition to evaluate.
+One can choose from `umd` or `google`.
+* *--split* is the subset (according to the specified partition) to evaluate; one can choose from `val` and `test` for the UMD partition, and only `val` for the Google partition.
+* *--resume* is the path to the weights of a trained model.
+
+## Results
+The complete test results of the released LAVT models are summarized as follows:
+
+| Dataset | P@0.5 | P@0.6 | P@0.7 | P@0.8 | P@0.9 | Overall IoU | Mean IoU |
+|:---------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----------:|:--------:|
+| RefCOCO val | 84.46 | 80.90 | 75.28 | 64.71 | 34.30 | 72.73 | 74.46 |
+| RefCOCO test A | 88.07 | 85.17 | 79.90 | 68.52 | 35.69 | 75.82 | 76.89 |
+| RefCOCO test B | 79.12 | 74.94 | 69.17 | 59.37 | 34.45 | 68.79 | 70.94 |
+| RefCOCO+ val | 74.44 | 70.91 | 65.58 | 56.34 | 30.23 | 62.14 | 65.81 |
+| RefCOCO+ test A | 80.68 | 77.96 | 72.90 | 62.21 | 32.36 | 68.38 | 70.97 |
+| RefCOCO+ test B | 65.66 | 61.85 | 55.94 | 47.56 | 27.24 | 55.10 | 59.23 |
+| G-Ref val (UMD) | 70.81 | 65.28 | 58.60 | 47.49 | 22.73 | 61.24 | 63.34 |
+| G-Ref test (UMD)| 71.54 | 66.38 | 59.00 | 48.21 | 23.10 | 62.09 | 63.62 |
+|G-Ref val (Goog.)| 71.16 | 67.21 | 61.76 | 51.98 | 27.30 | 60.50 | 63.66 |
+
+We have validated LAVT on RefCOCO with multiple runs.
+The overall IoU on the val set generally lies in the range of 72.73±0.5%.
+
+
+## Demo: Try LAVT on Your Own Image-text Pairs!
+One can run inference on a custom image-text pair
+and visualize the result by running the script `./demo_inference.py`.
+Choose your own photos and expressions and have fun.
+
+
+## Citing LAVT
+```
+@inproceedings{yang2022lavt,
+ title={LAVT: Language-Aware Vision Transformer for Referring Image Segmentation},
+ author={Yang, Zhao and Wang, Jiaqi and Tang, Yansong and Chen, Kai and Zhao, Hengshuang and Torr, Philip HS},
+ booktitle={CVPR},
+ year={2022}
+}
+```
+
+
+## Contributing
+We appreciate all contributions.
+It helps the project if you could
+- report issues you are facing,
+- give a :+1: on issues reported by others that are relevant to you,
+- answer issues reported by others for which you have found solutions,
+- and implement helpful new features or improve the code otherwise with pull requests.
+
+## Acknowledgements
+Code in this repository is built upon several public repositories.
+Specifically,
+* data pre-processing leverages the [refer](https://github.com/lichengunc/refer) repository,
+* the backbone model is implemented based on code from [Swin Transformer for Semantic Segmentation](https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation),
+* the training and testing pipelines are adapted from [RefVOS](https://github.com/miriambellver/refvos),
+* and the implementation of the BERT model (files in the `bert` directory) is from [Hugging Face Transformers v3.0.2](https://github.com/huggingface/transformers/tree/v3.0.2)
+(we migrated the relevant code over to fix a bug and to simplify installation).
+
+Some of these repositories in turn adapt code from [OpenMMLab](https://github.com/open-mmlab) and [TorchVision](https://github.com/pytorch/vision).
+We'd like to thank the authors/organizations of these repositories for open sourcing their projects.
+
+
+## License
+GNU GPLv3
diff --git a/elia/__pycache__/args.cpython-37.pyc b/__pycache__/args.cpython-37.pyc
similarity index 100%
rename from elia/__pycache__/args.cpython-37.pyc
rename to __pycache__/args.cpython-37.pyc
diff --git a/elia/__pycache__/args.cpython-38.pyc b/__pycache__/args.cpython-38.pyc
similarity index 100%
rename from elia/__pycache__/args.cpython-38.pyc
rename to __pycache__/args.cpython-38.pyc
diff --git a/elia/__pycache__/transforms.cpython-37.pyc b/__pycache__/transforms.cpython-37.pyc
similarity index 100%
rename from elia/__pycache__/transforms.cpython-37.pyc
rename to __pycache__/transforms.cpython-37.pyc
diff --git a/elia/__pycache__/transforms.cpython-38.pyc b/__pycache__/transforms.cpython-38.pyc
similarity index 100%
rename from elia/__pycache__/transforms.cpython-38.pyc
rename to __pycache__/transforms.cpython-38.pyc
diff --git a/elia/__pycache__/utils.cpython-37.pyc b/__pycache__/utils.cpython-37.pyc
similarity index 100%
rename from elia/__pycache__/utils.cpython-37.pyc
rename to __pycache__/utils.cpython-37.pyc
diff --git a/elia/__pycache__/utils.cpython-38.pyc b/__pycache__/utils.cpython-38.pyc
similarity index 99%
rename from elia/__pycache__/utils.cpython-38.pyc
rename to __pycache__/utils.cpython-38.pyc
index 59c425f22032f4435c681238cd31a83bd8fb1061..4d9a65a30ce4b04c63946a7156066658f76ad96f 100644
Binary files a/elia/__pycache__/utils.cpython-38.pyc and b/__pycache__/utils.cpython-38.pyc differ
diff --git a/elia/app.py b/app.py
similarity index 98%
rename from elia/app.py
rename to app.py
index 03b11a42bb61ff9a4b2dc51fd6d8e4fe106de893..9e8a0eed302489afa274220157d47411d4ed5022 100644
--- a/elia/app.py
+++ b/app.py
@@ -2,7 +2,7 @@ import gradio as gr
image_path = './image001.png'
sentence = 'spoon on the dish'
-weights = './checkpoints/model_best_refcoco_0508.pth'
+weights = './checkpoints/gradio.pth'
device = 'cpu'
# pre-process the input image
@@ -185,7 +185,7 @@ model = WrapperModel(single_model.backbone, single_bert_model, maskformer_head)
checkpoint = torch.load(weights, map_location='cpu')
-model.load_state_dict(checkpoint['model'], strict=False)
+model.load_state_dict(checkpoint, strict=False)
model.to(device)
model.eval()
#single_bert_model.load_state_dict(checkpoint['bert_model'])
diff --git a/elia/args.py b/args.py
similarity index 100%
rename from elia/args.py
rename to args.py
diff --git a/elia/bert/__pycache__/activations.cpython-37.pyc b/bert/__pycache__/activations.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/activations.cpython-37.pyc
rename to bert/__pycache__/activations.cpython-37.pyc
diff --git a/elia/bert/__pycache__/activations.cpython-38.pyc b/bert/__pycache__/activations.cpython-38.pyc
similarity index 96%
rename from elia/bert/__pycache__/activations.cpython-38.pyc
rename to bert/__pycache__/activations.cpython-38.pyc
index 3d3804e94dab5c36f8cdda958065b4e32dc4aabd..95d7574b27f87ba4a183070e0b28ac76c34cd93e 100644
Binary files a/elia/bert/__pycache__/activations.cpython-38.pyc and b/bert/__pycache__/activations.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/configuration_bert.cpython-37.pyc b/bert/__pycache__/configuration_bert.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/configuration_bert.cpython-37.pyc
rename to bert/__pycache__/configuration_bert.cpython-37.pyc
diff --git a/elia/bert/__pycache__/configuration_bert.cpython-38.pyc b/bert/__pycache__/configuration_bert.cpython-38.pyc
similarity index 99%
rename from elia/bert/__pycache__/configuration_bert.cpython-38.pyc
rename to bert/__pycache__/configuration_bert.cpython-38.pyc
index 9eb473e317fe8fb5c667a3dbe69bace5c29e8986..473602c33e886e9796ad1bde3f63d1091177de1d 100644
Binary files a/elia/bert/__pycache__/configuration_bert.cpython-38.pyc and b/bert/__pycache__/configuration_bert.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/configuration_utils.cpython-37.pyc b/bert/__pycache__/configuration_utils.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/configuration_utils.cpython-37.pyc
rename to bert/__pycache__/configuration_utils.cpython-37.pyc
diff --git a/elia/bert/__pycache__/configuration_utils.cpython-38.pyc b/bert/__pycache__/configuration_utils.cpython-38.pyc
similarity index 94%
rename from elia/bert/__pycache__/configuration_utils.cpython-38.pyc
rename to bert/__pycache__/configuration_utils.cpython-38.pyc
index eefb7f8062db85c760031b0999814cf84c2f992b..babcf78a077ee51e54ffc97893dd5ab0bc3e2fb3 100644
Binary files a/elia/bert/__pycache__/configuration_utils.cpython-38.pyc and b/bert/__pycache__/configuration_utils.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/file_utils.cpython-37.pyc b/bert/__pycache__/file_utils.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/file_utils.cpython-37.pyc
rename to bert/__pycache__/file_utils.cpython-37.pyc
diff --git a/elia/bert/__pycache__/file_utils.cpython-38.pyc b/bert/__pycache__/file_utils.cpython-38.pyc
similarity index 74%
rename from elia/bert/__pycache__/file_utils.cpython-38.pyc
rename to bert/__pycache__/file_utils.cpython-38.pyc
index 29ede2127f22edebb5409a5bfcbec143227b7d01..941c87d2e2a9c7e584c0e2730c06328c59727806 100644
Binary files a/elia/bert/__pycache__/file_utils.cpython-38.pyc and b/bert/__pycache__/file_utils.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/generation_utils.cpython-37.pyc b/bert/__pycache__/generation_utils.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/generation_utils.cpython-37.pyc
rename to bert/__pycache__/generation_utils.cpython-37.pyc
diff --git a/elia/bert/__pycache__/generation_utils.cpython-38.pyc b/bert/__pycache__/generation_utils.cpython-38.pyc
similarity index 99%
rename from elia/bert/__pycache__/generation_utils.cpython-38.pyc
rename to bert/__pycache__/generation_utils.cpython-38.pyc
index c36afeb132be1ff8b5cca362f0911bf211e1b1e3..211707b3b2fbfe3f02d064184ab0ab5ab698b961 100644
Binary files a/elia/bert/__pycache__/generation_utils.cpython-38.pyc and b/bert/__pycache__/generation_utils.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/modeling_bert.cpython-37.pyc b/bert/__pycache__/modeling_bert.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/modeling_bert.cpython-37.pyc
rename to bert/__pycache__/modeling_bert.cpython-37.pyc
diff --git a/elia/bert/__pycache__/modeling_bert.cpython-38.pyc b/bert/__pycache__/modeling_bert.cpython-38.pyc
similarity index 75%
rename from elia/bert/__pycache__/modeling_bert.cpython-38.pyc
rename to bert/__pycache__/modeling_bert.cpython-38.pyc
index 3e4f837799032e0940315dd0506de477acb1acf1..c728a7f70ea9ad1d20bdfd85ed5ac6cfb874f90a 100644
Binary files a/elia/bert/__pycache__/modeling_bert.cpython-38.pyc and b/bert/__pycache__/modeling_bert.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/modeling_utils.cpython-37.pyc b/bert/__pycache__/modeling_utils.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/modeling_utils.cpython-37.pyc
rename to bert/__pycache__/modeling_utils.cpython-37.pyc
diff --git a/elia/bert/__pycache__/modeling_utils.cpython-38.pyc b/bert/__pycache__/modeling_utils.cpython-38.pyc
similarity index 83%
rename from elia/bert/__pycache__/modeling_utils.cpython-38.pyc
rename to bert/__pycache__/modeling_utils.cpython-38.pyc
index 52806021ea6d36cc4fad71c04b02165b2e14f4e7..d750c965c11520180b7fb4369d436f2fc98f459b 100644
Binary files a/elia/bert/__pycache__/modeling_utils.cpython-38.pyc and b/bert/__pycache__/modeling_utils.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/multimodal_bert.cpython-37.pyc b/bert/__pycache__/multimodal_bert.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/multimodal_bert.cpython-37.pyc
rename to bert/__pycache__/multimodal_bert.cpython-37.pyc
diff --git a/elia/bert/__pycache__/multimodal_bert.cpython-38.pyc b/bert/__pycache__/multimodal_bert.cpython-38.pyc
similarity index 98%
rename from elia/bert/__pycache__/multimodal_bert.cpython-38.pyc
rename to bert/__pycache__/multimodal_bert.cpython-38.pyc
index 0919175640605c3d91aa56da62a3a916bca08648..c36ada98c9cae9d0318b7b03e248cb42e60ee0e4 100644
Binary files a/elia/bert/__pycache__/multimodal_bert.cpython-38.pyc and b/bert/__pycache__/multimodal_bert.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/tokenization_bert.cpython-37.pyc b/bert/__pycache__/tokenization_bert.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/tokenization_bert.cpython-37.pyc
rename to bert/__pycache__/tokenization_bert.cpython-37.pyc
diff --git a/elia/bert/__pycache__/tokenization_bert.cpython-38.pyc b/bert/__pycache__/tokenization_bert.cpython-38.pyc
similarity index 99%
rename from elia/bert/__pycache__/tokenization_bert.cpython-38.pyc
rename to bert/__pycache__/tokenization_bert.cpython-38.pyc
index 46e86bdfb72fd99e3a12e8304d1a87da8874af31..e9c35a24fb91108ac59b0627df02952d75b38e55 100644
Binary files a/elia/bert/__pycache__/tokenization_bert.cpython-38.pyc and b/bert/__pycache__/tokenization_bert.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/tokenization_utils.cpython-37.pyc b/bert/__pycache__/tokenization_utils.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/tokenization_utils.cpython-37.pyc
rename to bert/__pycache__/tokenization_utils.cpython-37.pyc
diff --git a/elia/bert/__pycache__/tokenization_utils.cpython-38.pyc b/bert/__pycache__/tokenization_utils.cpython-38.pyc
similarity index 99%
rename from elia/bert/__pycache__/tokenization_utils.cpython-38.pyc
rename to bert/__pycache__/tokenization_utils.cpython-38.pyc
index d0c7b8e10c8c808b0d8466184f2107f6efa0261c..e14dd5fad83a5a010883c13379281d78e5075523 100644
Binary files a/elia/bert/__pycache__/tokenization_utils.cpython-38.pyc and b/bert/__pycache__/tokenization_utils.cpython-38.pyc differ
diff --git a/elia/bert/__pycache__/tokenization_utils_base.cpython-37.pyc b/bert/__pycache__/tokenization_utils_base.cpython-37.pyc
similarity index 100%
rename from elia/bert/__pycache__/tokenization_utils_base.cpython-37.pyc
rename to bert/__pycache__/tokenization_utils_base.cpython-37.pyc
diff --git a/elia/bert/__pycache__/tokenization_utils_base.cpython-38.pyc b/bert/__pycache__/tokenization_utils_base.cpython-38.pyc
similarity index 99%
rename from elia/bert/__pycache__/tokenization_utils_base.cpython-38.pyc
rename to bert/__pycache__/tokenization_utils_base.cpython-38.pyc
index c9d46a9416d56f3d4ed2ae4473434002b42eb68c..322088ddf84ea2465f20d9e423c6d973b27b504b 100644
Binary files a/elia/bert/__pycache__/tokenization_utils_base.cpython-38.pyc and b/bert/__pycache__/tokenization_utils_base.cpython-38.pyc differ
diff --git a/elia/bert/activations.py b/bert/activations.py
similarity index 100%
rename from elia/bert/activations.py
rename to bert/activations.py
diff --git a/elia/bert/configuration_bert.py b/bert/configuration_bert.py
similarity index 100%
rename from elia/bert/configuration_bert.py
rename to bert/configuration_bert.py
diff --git a/elia/bert/configuration_utils.py b/bert/configuration_utils.py
similarity index 100%
rename from elia/bert/configuration_utils.py
rename to bert/configuration_utils.py
diff --git a/elia/bert/file_utils.py b/bert/file_utils.py
similarity index 100%
rename from elia/bert/file_utils.py
rename to bert/file_utils.py
diff --git a/elia/bert/generation_utils.py b/bert/generation_utils.py
similarity index 100%
rename from elia/bert/generation_utils.py
rename to bert/generation_utils.py
diff --git a/elia/bert/modeling_bert.py b/bert/modeling_bert.py
similarity index 100%
rename from elia/bert/modeling_bert.py
rename to bert/modeling_bert.py
diff --git a/elia/bert/modeling_utils.py b/bert/modeling_utils.py
similarity index 100%
rename from elia/bert/modeling_utils.py
rename to bert/modeling_utils.py
diff --git a/elia/bert/multimodal_bert.py b/bert/multimodal_bert.py
similarity index 100%
rename from elia/bert/multimodal_bert.py
rename to bert/multimodal_bert.py
diff --git a/elia/bert/tokenization_bert.py b/bert/tokenization_bert.py
similarity index 100%
rename from elia/bert/tokenization_bert.py
rename to bert/tokenization_bert.py
diff --git a/elia/bert/tokenization_utils.py b/bert/tokenization_utils.py
similarity index 100%
rename from elia/bert/tokenization_utils.py
rename to bert/tokenization_utils.py
diff --git a/elia/bert/tokenization_utils_base.py b/bert/tokenization_utils_base.py
similarity index 100%
rename from elia/bert/tokenization_utils_base.py
rename to bert/tokenization_utils_base.py
diff --git a/checkpoints/.test.py.swp b/checkpoints/.test.py.swp
new file mode 100644
index 0000000000000000000000000000000000000000..94ccf4c542156d44239742ea75e7168372cf8085
Binary files /dev/null and b/checkpoints/.test.py.swp differ
diff --git a/checkpoints/test.py b/checkpoints/test.py
new file mode 100644
index 0000000000000000000000000000000000000000..e782c9988784a4cd8c1d5d3fd8724bed14f95ac7
--- /dev/null
+++ b/checkpoints/test.py
@@ -0,0 +1,14 @@
+
+import torch
+
+# Strip a full training checkpoint down to the weights needed by the Gradio demo
+# (the image backbone, the language model, and the classifier) and save them as
+# gradio.pth, which app.py loads directly as a plain state dict.
+model = torch.load('model_best_refcoco_0508.pth', map_location='cpu')
+
+print(model['model'].keys())
+
+new_dict = {}
+for k in model['model'].keys():
+    if 'image_model' in k or 'language_model' in k or 'classifier' in k:
+        new_dict[k] = model['model'][k]
+
+torch.save(new_dict, 'gradio.pth')
diff --git a/data/__pycache__/dataset_refer_bert.cpython-37.pyc b/data/__pycache__/dataset_refer_bert.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..fc1cd4c769860d1f40ade461f36d9d7cb0e68a7f
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert.cpython-37.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert.cpython-38.pyc b/data/__pycache__/dataset_refer_bert.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..f7c69b7ce42b267eea273fc1eed1bd1baba31653
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_aug.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_aug.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..c76576c13163642f5bdca1fa1dc9ff8b9c786c14
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_aug.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_cl.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_cl.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..35c1ddade26303f024905642489100c7aa35a3ff
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_cl.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_concat.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_concat.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..bc94a15ce0fb8be199291661477716432bd4c559
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_concat.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_hfai.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_hfai.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..0338b11ce23efd96087ee848dbad8c26fc9be5ea
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_hfai.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_mixed.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_mixed.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..fad17e438b67dccada5925f126c86af2f4ffead5
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_mixed.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_mlm.cpython-37.pyc b/data/__pycache__/dataset_refer_bert_mlm.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..4e12b0ad4971f62085b505faa69da499fad0ac3c
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_mlm.cpython-37.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_mlm.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_mlm.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..1d85c3f3ed484413e83a84414185f40ee5bc8d30
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_mlm.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_refcoco.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_refcoco.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..fb6e8336d1021eb13894249b7f924a6a5e1cb40d
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_refcoco.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_refcocog_umd.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_refcocog_umd.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..4ac6a230e8b755eb83830429ec50c57a31ee6fd9
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_refcocog_umd.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_refcocoplus.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_refcocoplus.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..75cf2b3415ee9512982f0e4bd4f2a80d4c9199f1
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_refcocoplus.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_returnidx.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_returnidx.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..8bc6d75c2086b44e16dd3999b3e73117cafc2d68
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_returnidx.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_save.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_save.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..dd7d80e53edc0ea1e51b4c334dd5752e1af5d8e8
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_save.cpython-38.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_vis.cpython-37.pyc b/data/__pycache__/dataset_refer_bert_vis.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..47ff74d240b445fc293224f4d3dd589ac74bf266
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_vis.cpython-37.pyc differ
diff --git a/data/__pycache__/dataset_refer_bert_vpd.cpython-38.pyc b/data/__pycache__/dataset_refer_bert_vpd.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..9153c2bb3b4296e9ec6fb48a07f63836881e0df0
Binary files /dev/null and b/data/__pycache__/dataset_refer_bert_vpd.cpython-38.pyc differ
diff --git a/data/dataset_refer_bert.py b/data/dataset_refer_bert.py
new file mode 100644
index 0000000000000000000000000000000000000000..a8465d326d933d3c46df17de7d45dd0eb0de8516
--- /dev/null
+++ b/data/dataset_refer_bert.py
@@ -0,0 +1,121 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+
+class ReferDataset(data.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ return img, target, tensor_embeddings, attention_mask
diff --git a/data/dataset_refer_bert_aug.py b/data/dataset_refer_bert_aug.py
new file mode 100644
index 0000000000000000000000000000000000000000..9ccd7d82d1ee3f7c2cf190517103ddbf5f50c68e
--- /dev/null
+++ b/data/dataset_refer_bert_aug.py
@@ -0,0 +1,225 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+
+_EXIF_ORIENT = 274
+# BT.601 RGB-to-YUV conversion matrix referenced by convert_PIL_to_numpy below for
+# the "YUV-BT.601" format; it was not defined in this file, so the values here
+# mirror detectron2's detection_utils.
+_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]]
+
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 40
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ return img, target, tensor_embeddings, attention_mask
diff --git a/data/dataset_refer_bert_cl.py b/data/dataset_refer_bert_cl.py
new file mode 100644
index 0000000000000000000000000000000000000000..28fa672bd3ef6f4d8edbc28de51df9cca0d564f2
--- /dev/null
+++ b/data/dataset_refer_bert_cl.py
@@ -0,0 +1,233 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+import pickle
+
+_EXIF_ORIENT = 274
+# BT.601 RGB-to-YUV conversion matrix referenced by convert_PIL_to_numpy below for
+# the "YUV-BT.601" format; it was not defined in this file, so the values here
+# mirror detectron2's detection_utils.
+_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]]
+
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcoco_int8.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ if self.split == 'val':
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ return img, target, tensor_embeddings, attention_mask, torch.tensor(self.mixed_masks[this_img_id[0]]['masks'])
diff --git a/data/dataset_refer_bert_concat.py b/data/dataset_refer_bert_concat.py
new file mode 100644
index 0000000000000000000000000000000000000000..6ae1cca1f14346a5877650894cc53f58026b56db
--- /dev/null
+++ b/data/dataset_refer_bert_concat.py
@@ -0,0 +1,253 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+
+_EXIF_ORIENT = 274
+# BT.601 RGB-to-YUV conversion matrix referenced by convert_PIL_to_numpy below for
+# the "YUV-BT.601" format; it was not defined in this file, so the values here
+# mirror detectron2's detection_utils.
+_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]]
+
+
+from itertools import permutations
+
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ #self.max_tokens = 20
+ self.max_tokens = 40
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ # initializing list
+ test_list = ref['sentences']
+
+ # All possible concatenations in String List
+ # Using permutations() + loop
+ if not self.eval_mode:
+ test_list = [el['raw'] for el in test_list]
+ res = []
+
+ for k in test_list:
+ for l in test_list:
+ res.append(k + ', ' + l)
+ #for idx in range(1, len(test_list) + 1):
+ # temp.extend(list(permutations(test_list, idx)))
+ #res = []
+ #for ele in temp:
+ # res.append(", ".join(ele))
+ else:
+ print("eval mode")
+ test_list = [el['raw'] + ', ' + el ['raw'] for el in test_list]
+ res = test_list
+
+ #for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ for i, el in enumerate(res):
+ sentence_raw = el
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ return img, target, tensor_embeddings, attention_mask
diff --git a/data/dataset_refer_bert_hfai.py b/data/dataset_refer_bert_hfai.py
new file mode 100644
index 0000000000000000000000000000000000000000..2c0602a43b87e0e12480ad773691aa6d08940c5a
--- /dev/null
+++ b/data/dataset_refer_bert_hfai.py
@@ -0,0 +1,240 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+#class ReferDataset(data.Dataset):
+class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, indices):
+ #print(index)
+ #index = index[0]
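+ # ffrecord datasets receive a list of indices per call and are expected to
+ # return a ready-made batch, so the samples are stacked along dim 0 below.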
+ img_list = []
+ target_list = []
+ tensor_embedding_list = []
+ attention_mask_list = []
+ for index in indices:
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+ img_list.append(img.unsqueeze(0))
+ target_list.append(target.unsqueeze(0))
+ tensor_embedding_list.append(tensor_embeddings.unsqueeze(0))
+ attention_mask_list.append(attention_mask.unsqueeze(0))
+
+ img_list = torch.cat(img_list, dim=0)
+ target_list = torch.cat(target_list, dim=0)
+ tensor_embedding_list = torch.cat(tensor_embedding_list, dim=0)
+ attention_mask_list = torch.cat(attention_mask_list, dim=0)
+ #print(img_list.shape, target_list.shape, tensor_embedding_list.shape, attention_mask_list.shape)
+ return img_list, target_list, tensor_embedding_list, attention_mask_list
+ #print(img.shape)
+ #return img, target, tensor_embeddings, attention_mask
diff --git a/data/dataset_refer_bert_mixed.py b/data/dataset_refer_bert_mixed.py
new file mode 100644
index 0000000000000000000000000000000000000000..a1c365a82333d96392c28ecdc112b0ac488bef9c
--- /dev/null
+++ b/data/dataset_refer_bert_mixed.py
@@ -0,0 +1,242 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+import pickle
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
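+ # Pre-computed "mixed" masks for every image, keyed by COCO image id with a
+ # 'masks' field; the pickle paths below are cluster-specific absolute paths.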
+ if args.dataset == 'refcoco':
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcoco.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+ elif args.dataset == 'refcoco+':
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcoco+.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+ elif args.dataset == 'refcocog' and args.splitBy == 'umd':
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcocog_umd.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+ else:
+ raise ValueError('dataset not supported!')
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ if self.split == 'val':
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ return img, target, tensor_embeddings, attention_mask, torch.tensor(self.mixed_masks[this_img_id[0]]['masks'])
diff --git a/data/dataset_refer_bert_mlm.py b/data/dataset_refer_bert_mlm.py
new file mode 100644
index 0000000000000000000000000000000000000000..4b284f31ed06ff83420ab454339dfacc1ef46a5b
--- /dev/null
+++ b/data/dataset_refer_bert_mlm.py
@@ -0,0 +1,263 @@
+
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+#from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+#import ffrecord
+from copy import deepcopy
+
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False,
+ mlm_prob=0.15,
+ mlm_prob_mask=0.9,
+ mlm_prob_noise=0.0):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ self.mlm_prob = mlm_prob
+ self.mlm_prob_mask = mlm_prob_mask
+ self.mlm_prob_noise = mlm_prob_noise
+
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ #img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ #print(target.shape)
+ #print( np.argwhere(target.detach().cpu().numpy()).shape)
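+ # centroid (x, y) of the ground-truth mask, returned below as `position`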
+ tmp = np.argwhere(target.detach().cpu().numpy())
+ centroid = tmp.mean(0)
+ #print(centroid)
+ centroid_x, centroid_y = int(centroid[1]), int(centroid[0])
+ #centroid_x, centroid_y = centroid[1], centroid[0]
+ position = torch.tensor([centroid_x, centroid_y]).float()
+ #print(centroid_x, centroid_y)
+
+ #print(input_ids.shape)
+
+
+
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ # clone the cached token ids so the in-place MLM corruption below does not
+ # permanently overwrite the tokens stored in self.input_ids
+ tensor_embeddings = self.input_ids[index][choice_sent].clone()
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ target_embeddings = deepcopy(tensor_embeddings)
+ mlm_mask = []
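+ # BERT-style corruption: each position is selected with probability mlm_prob;
+ # of the selected tokens, a mlm_prob_mask fraction become [MASK], the next
+ # mlm_prob_noise fraction become a random token, and the rest are left as-is.
+ # Note that padding and special-token positions are not excluded here.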
+ for j in range(tensor_embeddings.shape[1]):
+
+ prob = random.random()
+ if prob < self.mlm_prob:
+ mlm_mask.append(1)
+ prob /= self.mlm_prob
+ if prob < self.mlm_prob_mask:
+ tensor_embeddings[0][j] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
+ elif prob < self.mlm_prob_mask + self.mlm_prob_noise:
+ tensor_embeddings[0][j] = np.random.randint(len(self.tokenizer))
+ else:
+ mlm_mask.append(0)
+ mlm_mask = torch.tensor(mlm_mask).unsqueeze(0)
+
+ #pos_ids = self.tokenizer.encode(text="{:d} {:d}".format(centroid_x, centroid_y), add_special_tokens=True)
+ #print(attention_mask)
+ #print(attention_mask.shape)
+
+ return img, target, tensor_embeddings, attention_mask, target_embeddings, mlm_mask, position
diff --git a/data/dataset_refer_bert_refcoco.py b/data/dataset_refer_bert_refcoco.py
new file mode 100644
index 0000000000000000000000000000000000000000..9fb14d1414d7891c2bc6a84f897ee37314d53300
--- /dev/null
+++ b/data/dataset_refer_bert_refcoco.py
@@ -0,0 +1,233 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+import pickle
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcoco.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ if self.split == 'val':
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ return img, target, tensor_embeddings, attention_mask, torch.tensor(self.mixed_masks[this_img_id[0]]['masks'])
diff --git a/data/dataset_refer_bert_refcocog_umd.py b/data/dataset_refer_bert_refcocog_umd.py
new file mode 100644
index 0000000000000000000000000000000000000000..ef46dc8e19e3c58f34100032a065f59cf67e3e84
--- /dev/null
+++ b/data/dataset_refer_bert_refcocog_umd.py
@@ -0,0 +1,233 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+import pickle
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcocog_umd.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ if self.split == 'val':
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ return img, target, tensor_embeddings, attention_mask, torch.tensor(self.mixed_masks[this_img_id[0]]['masks'])
diff --git a/data/dataset_refer_bert_refcocoplus.py b/data/dataset_refer_bert_refcocoplus.py
new file mode 100644
index 0000000000000000000000000000000000000000..87e183d9b58d67282e21b870613ea1a9ccd2e08e
--- /dev/null
+++ b/data/dataset_refer_bert_refcocoplus.py
@@ -0,0 +1,233 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+import pickle
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcoco+.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ if self.split == 'val':
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ return img, target, tensor_embeddings, attention_mask, torch.tensor(self.mixed_masks[this_img_id[0]]['masks'])
diff --git a/data/dataset_refer_bert_refcocoplus_etf.py b/data/dataset_refer_bert_refcocoplus_etf.py
new file mode 100644
index 0000000000000000000000000000000000000000..efba7439460aee0b0139d591cab9aa1d140fce13
--- /dev/null
+++ b/data/dataset_refer_bert_refcocoplus_etf.py
@@ -0,0 +1,251 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+import pickle
+
+def generate_random_orthogonal_matrix(num_classes, feat_in=768):
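+ # Reduced QR of a random (feat_in x num_classes) matrix: the columns of P are
+ # orthonormal, so each image gets a fixed, mutually orthogonal target vector.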
+ a = np.random.random(size=(feat_in, num_classes))
+ #print(a.shape)
+ P, _ = np.linalg.qr(a)
+ #n, m = num_classes, feat_in
+
+ #H = np.random.randn(n, m)
+ #P, s, vh = np.linalg.svd(H, full_matrices=False)
+ #mat = u @ vh
+ P = torch.tensor(P).float()
+
+ #assert torch.allclose(torch.matmul(P.T, P), torch.eye(num_classes), atol=1e-07), torch.max(torch.abs(torch.matmul(P.T, P) - torch.eye(num_classes)))
+ return P
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ # self.keys is a dict, so use len(self.keys) rather than calling it
+ print(len(self.keys))
+ self.orthogonal_features = generate_random_orthogonal_matrix(num_classes=len(self.keys))
+
+ with open('/ceph-jd/pub/jupyter/zhuangrongxian/notebooks/LAVT-RIS-bidirectional-refactor-mask2former/LAVT-RIS-fuckddp/refcoco+.pkl', 'rb') as handle:
+ self.mixed_masks = pickle.load(handle)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ # select the orthogonal column assigned to this image (the matrix is feat_in x num_classes)
+ orthogonal_feature = self.orthogonal_features[:, self.keys[this_img_id[0]]]
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ if self.split == 'val':
+ return img, target, tensor_embeddings, attention_mask
+ else:
+ return img, target, tensor_embeddings, attention_mask, torch.tensor(self.mixed_masks[this_img_id[0]]['masks']), orthogonal_feature
diff --git a/data/dataset_refer_bert_returnidx.py b/data/dataset_refer_bert_returnidx.py
new file mode 100644
index 0000000000000000000000000000000000000000..3f7639528b4b73ac19dd1c829032d84212472f60
--- /dev/null
+++ b/data/dataset_refer_bert_returnidx.py
@@ -0,0 +1,229 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+
+_EXIF_ORIENT = 274
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists per the bug:
+ https://github.com/python-pillow/Pillow/issues/3973
+ with the function `ImageOps.exif_transpose`. The Pillow source raises errors with
+ various methods, especially `tobytes`
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ self.total_images = len(self.ref_ids)
+ #print('total keys', len(self.keys))
+ print('total images', len(self.ref_ids))
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ return img, target, tensor_embeddings, attention_mask, self.keys[this_img_id[0]]
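+
+# Rough usage sketch (illustrative; the transform object and args wiring are assumptions):
+#   ds = ReferDataset(args, image_transforms=T, split='val', eval_mode=True)
+#   img, target, token_ids, attn_mask, hfai_key = ds[0]
+# In training mode token_ids/attn_mask have shape (1, 20) for one randomly sampled sentence;
+# with eval_mode=True all sentences of the object are stacked on the last dimension,
+# giving shape (1, 20, num_sentences).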
diff --git a/data/dataset_refer_bert_save.py b/data/dataset_refer_bert_save.py
new file mode 100644
index 0000000000000000000000000000000000000000..ae2b40ccf18f9e7e9e882baab1b444cf59d288db
--- /dev/null
+++ b/data/dataset_refer_bert_save.py
@@ -0,0 +1,225 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+
+_EXIF_ORIENT = 274  # EXIF tag id of the "Orientation" field
+
+# Standard BT.601 RGB -> YUV coefficients (as in detectron2's detection_utils);
+# needed by convert_PIL_to_numpy below when format == "YUV-BT.601".
+_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]]
+
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists because of a bug in Pillow's `ImageOps.exif_transpose`
+ (https://github.com/python-pillow/Pillow/issues/3973): the Pillow implementation
+ raises errors with various methods, especially `tobytes`.
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ return img, target, tensor_embeddings, attention_mask, this_img_id[0]
diff --git a/data/dataset_refer_bert_vis.py b/data/dataset_refer_bert_vis.py
new file mode 100644
index 0000000000000000000000000000000000000000..ebaf486687f2a3f2c8787cfd9bcc993868bfbfc8
--- /dev/null
+++ b/data/dataset_refer_bert_vis.py
@@ -0,0 +1,130 @@
+
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+
+class ReferDataset(data.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 20
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.raw_sentences = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+ raw_sentences_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+ raw_sentences_for_ref.append(sentence_raw)
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+ self.raw_sentences.append(raw_sentences_for_ref)
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ orig_img = np.array(img)
+ #orig_shape = np.array(img).shape
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ raw_sentence = self.raw_sentences[index]
+
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ return img, target, tensor_embeddings, attention_mask, raw_sentence, this_img['file_name'], orig_img
diff --git a/data/dataset_refer_bert_vpd.py b/data/dataset_refer_bert_vpd.py
new file mode 100644
index 0000000000000000000000000000000000000000..697989ca47ebfcd7e49b96bc86eac3768ced4ba0
--- /dev/null
+++ b/data/dataset_refer_bert_vpd.py
@@ -0,0 +1,226 @@
+import os
+import sys
+import torch.utils.data as data
+import torch
+from torchvision import transforms
+from torch.autograd import Variable
+import numpy as np
+from PIL import Image
+import torchvision.transforms.functional as TF
+import random
+
+from bert.tokenization_bert import BertTokenizer
+
+import h5py
+from refer.refer import REFER
+
+from args import get_parser
+
+# Dataset configuration initialization
+parser = get_parser()
+args = parser.parse_args()
+
+from hfai.datasets import CocoDetection
+
+from PIL import Image
+import numpy as np
+#from ffrecord.torch import DataLoader,Dataset
+import ffrecord
+
+_EXIF_ORIENT = 274  # EXIF tag id of the "Orientation" field
+
+# Standard BT.601 RGB -> YUV coefficients (as in detectron2's detection_utils);
+# needed by convert_PIL_to_numpy below when format == "YUV-BT.601".
+_M_RGB2YUV = [[0.299, 0.587, 0.114], [-0.14713, -0.28886, 0.436], [0.615, -0.51499, -0.10001]]
+
+def _apply_exif_orientation(image):
+ """
+ Applies the exif orientation correctly.
+
+ This code exists because of a bug in Pillow's `ImageOps.exif_transpose`
+ (https://github.com/python-pillow/Pillow/issues/3973): the Pillow implementation
+ raises errors with various methods, especially `tobytes`.
+
+ Function based on:
+ https://github.com/wkentaro/labelme/blob/v4.5.4/labelme/utils/image.py#L59
+ https://github.com/python-pillow/Pillow/blob/7.1.2/src/PIL/ImageOps.py#L527
+
+ Args:
+ image (PIL.Image): a PIL image
+
+ Returns:
+ (PIL.Image): the PIL image with exif orientation applied, if applicable
+ """
+ if not hasattr(image, "getexif"):
+ return image
+
+ try:
+ exif = image.getexif()
+ except Exception: # https://github.com/facebookresearch/detectron2/issues/1885
+ exif = None
+
+ if exif is None:
+ return image
+
+ orientation = exif.get(_EXIF_ORIENT)
+
+ method = {
+ 2: Image.FLIP_LEFT_RIGHT,
+ 3: Image.ROTATE_180,
+ 4: Image.FLIP_TOP_BOTTOM,
+ 5: Image.TRANSPOSE,
+ 6: Image.ROTATE_270,
+ 7: Image.TRANSVERSE,
+ 8: Image.ROTATE_90,
+ }.get(orientation)
+
+ if method is not None:
+ return image.transpose(method)
+ return image
+
+def convert_PIL_to_numpy(image, format):
+ """
+ Convert PIL image to numpy array of target format.
+
+ Args:
+ image (PIL.Image): a PIL image
+ format (str): the format of output image
+
+ Returns:
+ (np.ndarray): also see `read_image`
+ """
+ if format is not None:
+ # PIL only supports RGB, so convert to RGB and flip channels over below
+ conversion_format = format
+ if format in ["BGR", "YUV-BT.601"]:
+ conversion_format = "RGB"
+ image = image.convert(conversion_format)
+ image = np.asarray(image)
+ # PIL squeezes out the channel dimension for "L", so make it HWC
+ if format == "L":
+ image = np.expand_dims(image, -1)
+
+ # handle formats not supported by PIL
+ elif format == "BGR":
+ # flip channels if needed
+ image = image[:, :, ::-1]
+ elif format == "YUV-BT.601":
+ image = image / 255.0
+ image = np.dot(image, np.array(_M_RGB2YUV).T)
+
+ return image
+
+class ReferDataset(data.Dataset):
+#class ReferDataset(ffrecord.torch.Dataset):
+
+ def __init__(self,
+ args,
+ image_transforms=None,
+ target_transforms=None,
+ split='train',
+ eval_mode=False):
+
+ self.classes = []
+ self.image_transforms = image_transforms
+ self.target_transform = target_transforms
+ self.split = split
+ self.refer = REFER(args.refer_data_root, args.dataset, args.splitBy)
+
+ self.max_tokens = 77
+ print("max tokens", self.max_tokens)
+
+ ref_ids = self.refer.getRefIds(split=self.split)
+ img_ids = self.refer.getImgIds(ref_ids)
+
+ all_imgs = self.refer.Imgs
+ self.imgs = list(all_imgs[i] for i in img_ids)
+ self.ref_ids = ref_ids
+
+ self.input_ids = []
+ self.attention_masks = []
+ self.tokenizer = BertTokenizer.from_pretrained(args.bert_tokenizer)
+
+ self.eval_mode = eval_mode
+ # if we are testing on a dataset, test all sentences of an object;
+ # o/w, we are validating during training, randomly sample one sentence for efficiency
+ for r in ref_ids:
+ ref = self.refer.Refs[r]
+
+ sentences_for_ref = []
+ attentions_for_ref = []
+
+ for i, (el, sent_id) in enumerate(zip(ref['sentences'], ref['sent_ids'])):
+ sentence_raw = el['raw']
+ attention_mask = [0] * self.max_tokens
+ padded_input_ids = [0] * self.max_tokens
+
+ input_ids = self.tokenizer.encode(text=sentence_raw, add_special_tokens=True)
+
+ # truncation of tokens
+ input_ids = input_ids[:self.max_tokens]
+
+ padded_input_ids[:len(input_ids)] = input_ids
+ attention_mask[:len(input_ids)] = [1]*len(input_ids)
+
+ sentences_for_ref.append(torch.tensor(padded_input_ids).unsqueeze(0))
+ attentions_for_ref.append(torch.tensor(attention_mask).unsqueeze(0))
+
+ self.input_ids.append(sentences_for_ref)
+ self.attention_masks.append(attentions_for_ref)
+
+ split = 'train'
+ print(split)
+ self.hfai_dataset = CocoDetection(split, transform=None)
+ self.keys = {}
+ for i in range(len(self.hfai_dataset.reader.ids)):
+ self.keys[self.hfai_dataset.reader.ids[i]] = i
+
+ def get_classes(self):
+ return self.classes
+
+ def __len__(self):
+ return len(self.ref_ids)
+
+ def __getitem__(self, index):
+ #print(index)
+ #index = index[0]
+ this_ref_id = self.ref_ids[index]
+ this_img_id = self.refer.getImgIds(this_ref_id)
+ this_img = self.refer.Imgs[this_img_id[0]]
+
+ #print("this_ref_id", this_ref_id)
+ #print("this_img_id", this_img_id)
+ #print("this_img", this_img)
+ #img = Image.open(os.path.join(self.refer.IMAGE_DIR, this_img['file_name'])).convert("RGB")
+ img = self.hfai_dataset.reader.read_imgs([self.keys[this_img_id[0]]])[0]
+ img = _apply_exif_orientation(img)
+ img = convert_PIL_to_numpy(img, 'RGB')
+ #print(img.shape)
+ img = Image.fromarray(img)
+
+ ref = self.refer.loadRefs(this_ref_id)
+
+ ref_mask = np.array(self.refer.getMask(ref[0])['mask'])
+ annot = np.zeros(ref_mask.shape)
+ annot[ref_mask == 1] = 1
+
+ annot = Image.fromarray(annot.astype(np.uint8), mode="P")
+
+ if self.image_transforms is not None:
+ # resize, from PIL to tensor, and mean and std normalization
+ img, target = self.image_transforms(img, annot)
+
+ if self.eval_mode:
+ embedding = []
+ att = []
+ for s in range(len(self.input_ids[index])):
+ e = self.input_ids[index][s]
+ a = self.attention_masks[index][s]
+ embedding.append(e.unsqueeze(-1))
+ att.append(a.unsqueeze(-1))
+
+ tensor_embeddings = torch.cat(embedding, dim=-1)
+ attention_mask = torch.cat(att, dim=-1)
+ else:
+ choice_sent = np.random.choice(len(self.input_ids[index]))
+ tensor_embeddings = self.input_ids[index][choice_sent]
+ attention_mask = self.attention_masks[index][choice_sent]
+
+ #print(img.shape)
+ return img, target, tensor_embeddings, attention_mask
diff --git a/demo/dish_with_food.jpg b/demo/dish_with_food.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..01e81925438e183d5d45196ea4be460bbec03b5a
Binary files /dev/null and b/demo/dish_with_food.jpg differ
diff --git a/demo/empty_dish.jpg b/demo/empty_dish.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9cf78cc35767b55c49ae9cfd21176505f3fb8398
Binary files /dev/null and b/demo/empty_dish.jpg differ
diff --git a/demo/relax_bottle.jpg b/demo/relax_bottle.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..0629d620c14e1dd94f58d4ce8aaea3adcd8a43b1
Binary files /dev/null and b/demo/relax_bottle.jpg differ
diff --git a/demo/spoon_on_the_dish.jpg b/demo/spoon_on_the_dish.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..138b391e6f46172917aca1be0f339cc15f502a06
Binary files /dev/null and b/demo/spoon_on_the_dish.jpg differ
diff --git a/elia/demo_inference.py b/demo_inference.py
similarity index 99%
rename from elia/demo_inference.py
rename to demo_inference.py
index 630be118942e8ebc5a9bf6b13df650683f4e4ae8..fa3ceb4c9e42f94a46d8c650a597d6e1f76f40c1 100644
--- a/elia/demo_inference.py
+++ b/demo_inference.py
@@ -1,6 +1,6 @@
image_path = './image001.png'
sentence = 'spoon on the dish'
-weights = '/cluster/nvme4/cyx/lavt/vis/model_best_refcoco_0508.pth'
+weights = 'checkpoints/gradio.pth'
device = 'cpu'
# pre-process the input image
diff --git a/elia/.gitattributes b/elia/.gitattributes
deleted file mode 100644
index c7d9f3332a950355d5a77d85000f05e6f45435ea..0000000000000000000000000000000000000000
--- a/elia/.gitattributes
+++ /dev/null
@@ -1,34 +0,0 @@
-*.7z filter=lfs diff=lfs merge=lfs -text
-*.arrow filter=lfs diff=lfs merge=lfs -text
-*.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
-*.h5 filter=lfs diff=lfs merge=lfs -text
-*.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
-*.model filter=lfs diff=lfs merge=lfs -text
-*.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
-*.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
-*.pt filter=lfs diff=lfs merge=lfs -text
-*.pth filter=lfs diff=lfs merge=lfs -text
-*.rar filter=lfs diff=lfs merge=lfs -text
-*.safetensors filter=lfs diff=lfs merge=lfs -text
-saved_model/**/* filter=lfs diff=lfs merge=lfs -text
-*.tar.* filter=lfs diff=lfs merge=lfs -text
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
diff --git a/elia/README.md b/elia/README.md
deleted file mode 100644
index 869de8afb5569dca5c730028fe17228f54198bef..0000000000000000000000000000000000000000
--- a/elia/README.md
+++ /dev/null
@@ -1,222 +0,0 @@
-# LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
-Welcome to the official repository for the method presented in
-"LAVT: Language-Aware Vision Transformer for Referring Image Segmentation."
-
-
-![Pipeline Image](pipeline.jpg)
-
-Code in this repository is written using [PyTorch](https://pytorch.org/) and is organized in the following way (assuming the working directory is the root directory of this repository):
-* `./lib` contains files implementing the main network.
-* Inside `./lib`, `_utils.py` defines the highest-level model, which incorporates the backbone network
-defined in `backbone.py` and the simple mask decoder defined in `mask_predictor.py`.
-`segmentation.py` provides the model interface and initialization functions.
-* `./bert` contains files migrated from [Hugging Face Transformers v3.0.2](https://huggingface.co/transformers/v3.0.2/quicktour.html),
-which implement the BERT language model.
-We used Transformers v3.0.2 during development but it had a bug that would appear when using `DistributedDataParallel`.
-Therefore we maintain a copy of the relevant source files in this repository.
-This way, the bug is fixed and code in this repository is self-contained.
-* `./train.py` is invoked to train the model.
-* `./test.py` is invoked to run inference on the evaluation subsets after training.
-* `./refer` contains data pre-processing code and is also where data should be placed, including the images and all annotations.
-It is cloned from [refer](https://github.com/lichengunc/refer).
-* `./data/dataset_refer_bert.py` is where the dataset class is defined.
-* `./utils.py` defines functions that track training statistics and setup
-functions for `DistributedDataParallel`.
-
-
-## Updates
-**June 21st, 2022**. Uploaded the training logs and trained
-model weights of lavt_one.
-
-**June 9th, 2022**.
-Added a more efficient implementation of LAVT.
-* To train this new model, specify `--model` as `lavt_one`
-(and `lavt` is still valid for specifying the old model).
-The rest of the configuration stays unchanged.
-* The difference between this version and the previous one
-is that the language model has been moved inside the overall model,
-so that `DistributedDataParallel` needs to be applied only once.
-Applying it twice (on the standalone language model and the main branch)
-as done in the old implementation led to low GPU utility,
-which prevented scaling up training speed with more GPUs.
-We recommend training this model on 8 GPUs
-(and same as before with batch size 32).
-
-## Setting Up
-### Preliminaries
-The code has been verified to work with PyTorch v1.7.1 and Python 3.7.
-1. Clone this repository.
-2. Change directory to root of this repository.
-### Package Dependencies
-1. Create a new Conda environment with Python 3.7 then activate it:
-```shell
-conda create -n lavt python==3.7
-conda activate lavt
-```
-
-2. Install PyTorch v1.7.1 with a CUDA version that works on your cluster/machine (CUDA 10.2 is used in this example):
-```shell
-conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch
-```
-
-3. Install the packages in `requirements.txt` via `pip`:
-```shell
-pip install -r requirements.txt
-```
-
-### Datasets
-1. Follow instructions in the `./refer` directory to set up subdirectories
-and download annotations.
-This directory is a git clone (minus two data files that we do not need)
-from the [refer](https://github.com/lichengunc/refer) public API.
-
-2. Download images from [COCO](https://cocodataset.org/#download).
-Please use the first downloading link *2014 Train images [83K/13GB]*, and extract
-the downloaded `train_2014.zip` file to `./refer/data/images/mscoco/images`.
-
-### The Initialization Weights for Training
-1. Create the `./pretrained_weights` directory where we will be storing the weights.
-```shell
-mkdir ./pretrained_weights
-```
-2. Download [pre-trained classification weights of
-the Swin Transformer](https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_base_patch4_window12_384_22k.pth),
-and put the `pth` file in `./pretrained_weights`.
-These weights are needed for training to initialize the model.
-
-### Trained Weights of LAVT for Testing
-1. Create the `./checkpoints` directory where we will be storing the weights.
-```shell
-mkdir ./checkpoints
-```
-2. Download LAVT model weights (which are stored on Google Drive) using links below and put them in `./checkpoints`.
-
-| [RefCOCO](https://drive.google.com/file/d/13D-OeEOijV8KTC3BkFP-gOJymc6DLwVT/view?usp=sharing) | [RefCOCO+](https://drive.google.com/file/d/1B8Q44ZWsc8Pva2xD_M-KFh7-LgzeH2-2/view?usp=sharing) | [G-Ref (UMD)](https://drive.google.com/file/d/1BjUnPVpALurkGl7RXXvQiAHhA-gQYKvK/view?usp=sharing) | [G-Ref (Google)](https://drive.google.com/file/d/1weiw5UjbPfo3tCBPfB8tu6xFXCUG16yS/view?usp=sharing) |
-|---|---|---|---|
-
-3. Model weights and training logs of the new lavt_one implementation are below.
-
-| RefCOCO | RefCOCO+ | G-Ref (UMD) | G-Ref (Google) |
-|:-----:|:-----:|:-----:|:-----:|
-|[log](https://drive.google.com/file/d/1YIojIHqe3bxxsWOltifa2U9jH67hPHLM/view?usp=sharing) | [weights](https://drive.google.com/file/d/1xFMEXr6AGU97Ypj1yr8oo00uObbeIQvJ/view?usp=sharing)|[log](https://drive.google.com/file/d/1Z34T4gEnWlvcSUQya7txOuM0zdLK7MRT/view?usp=sharing) | [weights](https://drive.google.com/file/d/1HS8ZnGaiPJr-OmoUn4-4LVnVtD_zHY6w/view?usp=sharing)|[log](https://drive.google.com/file/d/14VAgahngOV8NA6noLZCqDoqaUrlW14v8/view?usp=sharing) | [weights](https://drive.google.com/file/d/14g8NzgZn6HzC6tP_bsQuWmh5LnOcovsE/view?usp=sharing)|[log](https://drive.google.com/file/d/1JBXfmlwemWSvs92Rky0TlHcVuuLpt4Da/view?usp=sharing) | [weights](https://drive.google.com/file/d/1IJeahFVLgKxu_BVmWacZs3oUzgTCeWcz/view?usp=sharing)|
-
-* The Prec@K, overall IoU and mean IoU numbers in the training logs will differ
-from the final results obtained by running `test.py`,
-because only one out of multiple annotated expressions is
-randomly selected and evaluated for each object during training.
-But these numbers give a good idea about the test performance.
-The two should be fairly close.
-
-
-## Training
-We use `DistributedDataParallel` from PyTorch.
-The released `lavt` weights were trained using 4 x 32G V100 cards (max mem on each card was about 26G).
-The released `lavt_one` weights were trained using 8 x 32G V100 cards (max mem on each card was about 13G).
-Using more cards was to accelerate training.
-To run on 4 GPUs (with IDs 0, 1, 2, and 3) on a single node:
-```shell
-mkdir ./models
-
-mkdir ./models/refcoco
-CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcoco --model_id refcoco --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/refcoco/output
-
-mkdir ./models/refcoco+
-CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcoco+ --model_id refcoco+ --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/refcoco+/output
-
-mkdir ./models/gref_umd
-CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcocog --splitBy umd --model_id gref_umd --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/gref_umd/output
-
-mkdir ./models/gref_google
-CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 12345 train.py --model lavt --dataset refcocog --splitBy google --model_id gref_google --batch-size 8 --lr 0.00005 --wd 1e-2 --swin_type base --pretrained_swin_weights ./pretrained_weights/swin_base_patch4_window12_384_22k.pth --epochs 40 --img_size 480 2>&1 | tee ./models/gref_google/output
-```
-* *--model* is a pre-defined model name. Options include `lavt` and `lavt_one`. See [Updates](#updates).
-* *--dataset* is the dataset name. One can choose from `refcoco`, `refcoco+`, and `refcocog`.
-* *--splitBy* needs to be specified if and only if the dataset is G-Ref (which is also called RefCOCOg).
-`umd` identifies the UMD partition and `google` identifies the Google partition.
-* *--model_id* is the model name one should define oneself (*e.g.*, customize it to contain training/model configurations, dataset information, experiment IDs, *etc*.).
-It is used in two ways: Training log will be saved as `./models/[args.model_id]/output` and the best checkpoint will be saved as `./checkpoints/model_best_[args.model_id].pth`.
-* *--swin_type* specifies the version of the Swin Transformer.
-One can choose from `tiny`, `small`, `base`, and `large`. The default is `base`.
-* *--pretrained_swin_weights* specifies the path to pre-trained Swin Transformer weights used for model initialization.
-* Note that currently we need to manually create the `./models/[args.model_id]` directory via `mkdir` before running `train.py`.
-This is because we use `tee` to redirect `stdout` and `stderr` to `./models/[args.model_id]/output` for logging.
-This is a nuisance and should be resolved in the future, *i.e.*, using a proper logger or a bash script for initiating training.
-
-## Testing
-For RefCOCO/RefCOCO+, run one of
-```shell
-python test.py --model lavt --swin_type base --dataset refcoco --split val --resume ./checkpoints/refcoco.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
-python test.py --model lavt --swin_type base --dataset refcoco+ --split val --resume ./checkpoints/refcoco+.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
-```
-* *--split* is the subset to evaluate, and one can choose from `val`, `testA`, and `testB`.
-* *--resume* is the path to the weights of a trained model.
-
-For G-Ref (UMD)/G-Ref (Google), run one of
-```shell
-python test.py --model lavt --swin_type base --dataset refcocog --splitBy umd --split val --resume ./checkpoints/gref_umd.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
-python test.py --model lavt --swin_type base --dataset refcocog --splitBy google --split val --resume ./checkpoints/gref_google.pth --workers 4 --ddp_trained_weights --window12 --img_size 480
-```
-* *--splitBy* specifies the partition to evaluate.
-One can choose from `umd` or `google`.
-* *--split* is the subset (according to the specified partition) to evaluate, and one can choose from `val` and `test` for the UMD partition, and only `val` for the Google partition..
-* *--resume* is the path to the weights of a trained model.
-
-## Results
-The complete test results of the released LAVT models are summarized as follows:
-
-| Dataset | P@0.5 | P@0.6 | P@0.7 | P@0.8 | P@0.9 | Overall IoU | Mean IoU |
-|:---------------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----------:|:--------:|
-| RefCOCO val | 84.46 | 80.90 | 75.28 | 64.71 | 34.30 | 72.73 | 74.46 |
-| RefCOCO test A | 88.07 | 85.17 | 79.90 | 68.52 | 35.69 | 75.82 | 76.89 |
-| RefCOCO test B | 79.12 | 74.94 | 69.17 | 59.37 | 34.45 | 68.79 | 70.94 |
-| RefCOCO+ val | 74.44 | 70.91 | 65.58 | 56.34 | 30.23 | 62.14 | 65.81 |
-| RefCOCO+ test A | 80.68 | 77.96 | 72.90 | 62.21 | 32.36 | 68.38 | 70.97 |
-| RefCOCO+ test B | 65.66 | 61.85 | 55.94 | 47.56 | 27.24 | 55.10 | 59.23 |
-| G-Ref val (UMD) | 70.81 | 65.28 | 58.60 | 47.49 | 22.73 | 61.24 | 63.34 |
-| G-Ref test (UMD)| 71.54 | 66.38 | 59.00 | 48.21 | 23.10 | 62.09 | 63.62 |
-|G-Ref val (Goog.)| 71.16 | 67.21 | 61.76 | 51.98 | 27.30 | 60.50 | 63.66 |
-
-We have validated LAVT on RefCOCO with multiple runs.
-The overall IoU on the val set generally lies in the range of 72.73±0.5%.
-
-
-## Demo: Try LAVT on Your Own Image-text Pairs!
-One can run inference on a custom image-text pair
-and visualize the result by running the script `./demo_inference.py`.
-Choose your photos and expessions and have fun.
-
-
-## Citing LAVT
-```
-@inproceedings{yang2022lavt,
- title={LAVT: Language-Aware Vision Transformer for Referring Image Segmentation},
- author={Yang, Zhao and Wang, Jiaqi and Tang, Yansong and Chen, Kai and Zhao, Hengshuang and Torr, Philip HS},
- booktitle={CVPR},
- year={2022}
-}
-```
-
-
-## Contributing
-We appreciate all contributions.
-It helps the project if you could
-- report issues you are facing,
-- give a :+1: on issues reported by others that are relevant to you,
-- answer issues reported by others for which you have found solutions,
-- and implement helpful new features or improve the code otherwise with pull requests.
-
-## Acknowledgements
-Code in this repository is built upon several public repositories.
-Specifically,
-* data pre-processing leverages the [refer](https://github.com/lichengunc/refer) repository,
-* the backbone model is implemented based on code from [Swin Transformer for Semantic Segmentation](https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation),
-* the training and testing pipelines are adapted from [RefVOS](https://github.com/miriambellver/refvos),
-* and implementation of the BERT model (files in the bert directory) is from [Hugging Face Transformers v3.0.2](https://github.com/huggingface/transformers/tree/v3.0.2)
-(we migrated over the relevant code to fix a bug and simplify the installation process).
-
-Some of these repositories in turn adapt code from [OpenMMLab](https://github.com/open-mmlab) and [TorchVision](https://github.com/pytorch/vision).
-We'd like to thank the authors/organizations of these repositories for open sourcing their projects.
-
-
-## License
-GNU GPLv3
diff --git a/lib/__pycache__/_utils.cpython-37.pyc b/lib/__pycache__/_utils.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..56b0bb9799e8279de25a81fd7e935fdcb1ad89c3
Binary files /dev/null and b/lib/__pycache__/_utils.cpython-37.pyc differ
diff --git a/lib/__pycache__/_utils.cpython-38.pyc b/lib/__pycache__/_utils.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..c377cf49d47e0a5313dc70d5a64401d7c6f6a629
Binary files /dev/null and b/lib/__pycache__/_utils.cpython-38.pyc differ
diff --git a/lib/__pycache__/backbone.cpython-37.pyc b/lib/__pycache__/backbone.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d47d1770ae02f3e15734ac32c340d40b02e6f4c9
Binary files /dev/null and b/lib/__pycache__/backbone.cpython-37.pyc differ
diff --git a/lib/__pycache__/backbone_ppm.cpython-37.pyc b/lib/__pycache__/backbone_ppm.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..213c8a82c6cf0bb9dc1b86a03a26b7ed88b2f6ee
Binary files /dev/null and b/lib/__pycache__/backbone_ppm.cpython-37.pyc differ
diff --git a/lib/__pycache__/backbone_ppm.cpython-38.pyc b/lib/__pycache__/backbone_ppm.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..286043db6e23f60509b317f5f2d06d9ce6a20944
Binary files /dev/null and b/lib/__pycache__/backbone_ppm.cpython-38.pyc differ
diff --git a/lib/__pycache__/mask_predictor.cpython-37.pyc b/lib/__pycache__/mask_predictor.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..ec3f92818f5026781f92261b24cb9947cf2864bc
Binary files /dev/null and b/lib/__pycache__/mask_predictor.cpython-37.pyc differ
diff --git a/lib/__pycache__/mask_predictor.cpython-38.pyc b/lib/__pycache__/mask_predictor.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..ce68d17a6ed2da54c8c193caf0dcf52a3feb6f48
Binary files /dev/null and b/lib/__pycache__/mask_predictor.cpython-38.pyc differ
diff --git a/lib/__pycache__/multimodal_segmentation_ppm.cpython-37.pyc b/lib/__pycache__/multimodal_segmentation_ppm.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..7956edc123e99af5d24940caed2ef486b86608bf
Binary files /dev/null and b/lib/__pycache__/multimodal_segmentation_ppm.cpython-37.pyc differ
diff --git a/lib/__pycache__/multimodal_segmentation_ppm.cpython-38.pyc b/lib/__pycache__/multimodal_segmentation_ppm.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..721c50ebb9b2205a010fb6ed28466f41eb7ab4f1
Binary files /dev/null and b/lib/__pycache__/multimodal_segmentation_ppm.cpython-38.pyc differ
diff --git a/lib/__pycache__/multimodal_swin_ppm.cpython-37.pyc b/lib/__pycache__/multimodal_swin_ppm.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..552b6240761a06c61bf474ffcb7bcb80b5bcd697
Binary files /dev/null and b/lib/__pycache__/multimodal_swin_ppm.cpython-37.pyc differ
diff --git a/lib/__pycache__/multimodal_swin_ppm.cpython-38.pyc b/lib/__pycache__/multimodal_swin_ppm.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..b1577445e4e96a9049c47be509d4c2536f4fe16a
Binary files /dev/null and b/lib/__pycache__/multimodal_swin_ppm.cpython-38.pyc differ
diff --git a/lib/__pycache__/segmentation.cpython-37.pyc b/lib/__pycache__/segmentation.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d59de9237092fb2e059dea08afbac92009a66733
Binary files /dev/null and b/lib/__pycache__/segmentation.cpython-37.pyc differ
diff --git a/lib/_utils.py b/lib/_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..d85bacd640a8ee196d185e953342a19c211d0bad
--- /dev/null
+++ b/lib/_utils.py
@@ -0,0 +1,56 @@
+from collections import OrderedDict
+import sys
+import torch
+from torch import nn
+from torch.nn import functional as F
+from bert.modeling_bert import BertModel
+
+
+class _LAVTSimpleDecode(nn.Module):
+ def __init__(self, backbone, classifier):
+ super(_LAVTSimpleDecode, self).__init__()
+ self.backbone = backbone
+ self.classifier = classifier
+
+ def forward(self, x, l_feats, l_mask):
+ input_shape = x.shape[-2:]
+ features = self.backbone(x, l_feats, l_mask)
+ x_c1, x_c2, x_c3, x_c4 = features
+ x = self.classifier(x_c4, x_c3, x_c2, x_c1)
+ x = F.interpolate(x, size=input_shape, mode='bilinear', align_corners=True)
+
+ return x
+
+
+class LAVT(_LAVTSimpleDecode):
+ pass
+
+
+###############################################
+# LAVT One: put BERT inside the overall model #
+###############################################
+class _LAVTOneSimpleDecode(nn.Module):
+ def __init__(self, backbone, classifier, args):
+ super(_LAVTOneSimpleDecode, self).__init__()
+ self.backbone = backbone
+ self.classifier = classifier
+ self.text_encoder = BertModel.from_pretrained(args.ck_bert)
+ self.text_encoder.pooler = None
+
+ def forward(self, x, text, l_mask):
+ input_shape = x.shape[-2:]
+ ### language inference ###
+ l_feats = self.text_encoder(text, attention_mask=l_mask)[0]  # (B, N_l, 768)
+ l_feats = l_feats.permute(0, 2, 1) # (B, 768, N_l) to make Conv1d happy
+ l_mask = l_mask.unsqueeze(dim=-1) # (batch, N_l, 1)
+ ##########################
+ features = self.backbone(x, l_feats, l_mask)
+ x_c1, x_c2, x_c3, x_c4 = features
+ x = self.classifier(x_c4, x_c3, x_c2, x_c1)
+ x = F.interpolate(x, size=input_shape, mode='bilinear', align_corners=True)
+
+ return x
+
+
+class LAVTOne(_LAVTOneSimpleDecode):
+ pass
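+
+# Shape sketch (illustrative; backbone/classifier construction is handled in
+# lib/segmentation.py, and the number of output classes is an assumption):
+#   model = LAVTOne(backbone, classifier, args)
+#   out = model(images, token_ids, attention_mask)
+#   # images: (B, 3, H, W); token_ids and attention_mask: (B, N_l)
+#   # out: (B, num_classes, H, W), bilinearly upsampled to the input resolution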
diff --git a/lib/backbone.py b/lib/backbone.py
new file mode 100644
index 0000000000000000000000000000000000000000..275b275a400ece3d71bbf85e243b1b2ffe816460
--- /dev/null
+++ b/lib/backbone.py
@@ -0,0 +1,719 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+import numpy as np
+from timm.models.layers import DropPath, to_2tuple, trunc_normal_
+from .mmcv_custom import load_checkpoint
+from mmseg.utils import get_root_logger
+
+
+class Mlp(nn.Module):
+ """ Multilayer perceptron."""
+
+ def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
+ x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
+ return x
+
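+# Round-trip sketch for the two helpers above (shapes are illustrative assumptions):
+#   x = torch.randn(2, 56, 56, 96)                # (B, H, W, C), H and W divisible by window_size
+#   windows = window_partition(x, 7)              # (2 * 8 * 8, 7, 7, 96)
+#   x_back = window_reverse(windows, 7, 56, 56)   # (2, 56, 56, 96), identical to x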
+
+class WindowAttention(nn.Module):
+ """ Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both of shifted and non-shifted window.
+
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim ** -0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = nn.Parameter(
+ torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = torch.arange(self.window_size[0])
+ coords_w = torch.arange(self.window_size[1])
+ coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
+ relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
+ relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+ relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
+ self.register_buffer("relative_position_index", relative_position_index)
+
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table, std=.02)
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x, mask=None):
+ """ Forward function.
+
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
+ q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
+ q = q * self.scale
+ attn = (q @ k.transpose(-2, -1))
+ relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
+ self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.view(-1, self.num_heads, N, N)
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+ x = (attn @ v).transpose(1, 2).reshape(B_, N, C)  # merge the attention heads back: (B_, nH, N, C/nH) -> (B_, N, C)
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
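+# Usage sketch for WindowAttention (illustrative shapes): with dim=96, window_size=(7, 7) and
+# num_heads=3, an input of shape (num_windows*B, 49, 96) is mapped to an output of the same
+# shape; the optional mask has shape (num_windows, 49, 49) and is only used for shifted windows.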
+
+class SwinTransformerBlock(nn.Module):
+ """ Swin Transformer Block.
+
+ Args:
+ dim (int): Number of input channels.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, num_heads, window_size=7, shift_size=0,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
+ act_layer=nn.GELU, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+ assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
+ qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
+
+ self.H = None
+ self.W = None
+
+ def forward(self, x, mask_matrix):
+ """ Forward function.
+
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ mask_matrix: Attention mask for cyclic shift.
+ """
+ B, L, C = x.shape
+ H, W = self.H, self.W
+ assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.view(B, H, W, C)
+
+ # pad feature maps to multiples of window size
+ pad_l = pad_t = 0
+ pad_r = (self.window_size - W % self.window_size) % self.window_size
+ pad_b = (self.window_size - H % self.window_size) % self.window_size
+ x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
+ _, Hp, Wp, _ = x.shape
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
+ attn_mask = mask_matrix
+ else:
+ shifted_x = x
+ attn_mask = None
+
+ # partition windows
+ x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
+ x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA
+ attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
+
+ # merge windows
+ attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
+ shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
+ else:
+ x = shifted_x
+
+ if pad_r > 0 or pad_b > 0:
+ x = x[:, :H, :W, :].contiguous()
+
+ x = x.view(B, H * W, C)
+
+ # FFN feed-forward network
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+
+class PatchMerging(nn.Module):
+ """ Patch Merging Layer
+
+ Args:
+ dim (int): Number of input channels.
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+ def __init__(self, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
+ self.norm = norm_layer(4 * dim)
+
+ def forward(self, x, H, W):
+ """ Forward function.
+
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+
+ x = x.view(B, H, W, C)
+
+ # padding
+ pad_input = (H % 2 == 1) or (W % 2 == 1)
+ if pad_input:
+ x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
+
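+# Shape sketch: PatchMerging halves each spatial dimension and doubles the channel count,
+# e.g. an input x of shape (B, 56*56, 96) with H = W = 56 comes out as (B, 28*28, 192).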
+
+class PatchEmbed(nn.Module):
+ """ Image to Patch Embedding
+
+ Args:
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ patch_size = to_2tuple(patch_size)
+ self.patch_size = patch_size
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ """Forward function."""
+ # padding
+ _, _, H, W = x.size()
+ if W % self.patch_size[1] != 0:
+ x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
+ if H % self.patch_size[0] != 0:
+ x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
+
+ x = self.proj(x) # B C Wh Ww
+ if self.norm is not None:
+ Wh, Ww = x.size(2), x.size(3)
+ x = x.flatten(2).transpose(1, 2)
+ x = self.norm(x)
+ x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
+
+ return x
+
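+# Shape sketch: with the default patch_size=4 and embed_dim=96, an image batch of shape
+# (B, 3, 480, 480) is embedded to (B, 96, 120, 120); inputs whose height or width is not a
+# multiple of 4 are padded on the right/bottom first.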
+
+class MultiModalSwinTransformer(nn.Module):
+ def __init__(self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.,
+ attn_drop_rate=0.,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ use_checkpoint=False,
+ num_heads_fusion=[1, 1, 1, 1],
+ fusion_drop=0.0
+ ):
+ super().__init__()
+
+ self.pretrain_img_size = pretrain_img_size
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.out_indices = out_indices
+ self.frozen_stages = frozen_stages
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+
+ # absolute position embedding
+ if self.ape:
+ pretrain_img_size = to_2tuple(pretrain_img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
+
+ self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
+ trunc_normal_(self.absolute_pos_embed, std=.02)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
+
+ # build layers
+ self.layers = nn.ModuleList()
+ for i_layer in range(self.num_layers):
+ layer = MMBasicLayer(
+ dim=int(embed_dim * 2 ** i_layer),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop_rate,
+ attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
+ norm_layer=norm_layer,
+ downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
+ use_checkpoint=use_checkpoint,
+ num_heads_fusion=num_heads_fusion[i_layer],
+ fusion_drop=fusion_drop
+ )
+ self.layers.append(layer)
+
+ num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
+ self.num_features = num_features
+
+ # add a norm layer for each output
+ for i_layer in out_indices:
+ layer = norm_layer(num_features[i_layer])
+ layer_name = f'norm{i_layer}'
+ self.add_module(layer_name, layer)
+
+ self._freeze_stages()
+
+ def _freeze_stages(self):
+ if self.frozen_stages >= 0:
+ self.patch_embed.eval()
+ for param in self.patch_embed.parameters():
+ param.requires_grad = False
+
+ if self.frozen_stages >= 1 and self.ape:
+ self.absolute_pos_embed.requires_grad = False
+
+ if self.frozen_stages >= 2:
+ self.pos_drop.eval()
+ for i in range(0, self.frozen_stages - 1):
+ m = self.layers[i]
+ m.eval()
+ for param in m.parameters():
+ param.requires_grad = False
+
+ def init_weights(self, pretrained=None):
+ """Initialize the weights in backbone.
+
+ Args:
+ pretrained (str, optional): Path to pre-trained weights.
+ Defaults to None.
+ """
+
+ def _init_weights(m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight, std=.02)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ nn.init.constant_(m.bias, 0)
+ elif isinstance(m, nn.LayerNorm):
+ nn.init.constant_(m.bias, 0)
+ nn.init.constant_(m.weight, 1.0)
+
+ if isinstance(pretrained, str):
+ self.apply(_init_weights)
+ logger = get_root_logger()
+ load_checkpoint(self, pretrained, strict=('upernet' in pretrained), logger=logger)
+ elif pretrained is None:
+ self.apply(_init_weights)
+ else:
+ raise TypeError('pretrained must be a str or None')
+
+ def forward(self, x, l, l_mask):
+ """Forward function."""
+ x = self.patch_embed(x)
+
+ Wh, Ww = x.size(2), x.size(3)
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
+ x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
+ else:
+ x = x.flatten(2).transpose(1, 2)
+ x = self.pos_drop(x)
+
+ outs = []
+ for i in range(self.num_layers):
+ layer = self.layers[i]
+ x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww, l, l_mask)
+
+ if i in self.out_indices:
+ norm_layer = getattr(self, f'norm{i}')
+ x_out = norm_layer(x_out) # output of a Block has shape (B, H*W, dim)
+
+ out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
+ outs.append(out)
+
+ return tuple(outs)
+
+ def train(self, mode=True):
+ """Convert the model into training mode while keep layers freezed."""
+ super(MultiModalSwinTransformer, self).train(mode)
+ self._freeze_stages()
+
+
+class MMBasicLayer(nn.Module):
+ def __init__(self,
+ dim,
+ depth,
+ num_heads,
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.,
+ attn_drop=0.,
+ drop_path=0.,
+ norm_layer=nn.LayerNorm,
+ downsample=None,
+ use_checkpoint=False,
+ num_heads_fusion=1,
+ fusion_drop=0.0
+ ):
+ super().__init__()
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.depth = depth
+ self.use_checkpoint = use_checkpoint
+ self.dim = dim
+
+ # build blocks
+ self.blocks = nn.ModuleList([
+ SwinTransformerBlock(
+ dim=dim,
+ num_heads=num_heads,
+ window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop,
+ attn_drop=attn_drop,
+ drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
+ norm_layer=norm_layer)
+ for i in range(depth)])
+
+ # fuse before downsampling
+        self.fusion = PWAM(dim,  # number of channels of the visual input and of the fused output
+ dim, # v_in
+ 768, # l_in
+ dim, # key
+ dim, # value
+ num_heads=num_heads_fusion,
+ dropout=fusion_drop)
+
+ self.res_gate = nn.Sequential(
+ nn.Linear(dim, dim, bias=False),
+ nn.ReLU(),
+ nn.Linear(dim, dim, bias=False),
+ nn.Tanh()
+ )
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+ # initialize the gate to 0
+ nn.init.zeros_(self.res_gate[0].weight)
+ nn.init.zeros_(self.res_gate[2].weight)
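+        # with zero-initialized weights the gate outputs tanh(0) = 0, so the
+        # language-fused residual contributes nothing at the start of training and
+        # the stage initially behaves like a standard Swin stage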
+
+ def forward(self, x, H, W, l, l_mask):
+ """ Forward function.
+
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
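+        # every contiguous region created by the cyclic shift gets a distinct id (cnt);
+        # pairwise id differences are 0 within a region and non-zero across regions,
+        # so the -100.0 fill blocks attention between tokens of different regions
+        # in the shifted-window (SW-MSA) blocks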
+
+ for blk in self.blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+
+ # PWAM fusion
+ x_residual = self.fusion(x, l, l_mask)
+ # apply a gate on the residual
+ x = x + (self.res_gate(x_residual) * x_residual)
+
+ if self.downsample is not None:
+ x_down = self.downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+
+class PWAM(nn.Module):
+ def __init__(self, dim, v_in_channels, l_in_channels, key_channels, value_channels, num_heads=0, dropout=0.0):
+ super(PWAM, self).__init__()
+ # input x shape: (B, H*W, dim)
+ self.vis_project = nn.Sequential(nn.Conv1d(dim, dim, 1, 1), # the init function sets bias to 0 if bias is True
+ nn.GELU(),
+ nn.Dropout(dropout)
+ )
+
+ self.image_lang_att = SpatialImageLanguageAttention(v_in_channels, # v_in
+ l_in_channels, # l_in
+ key_channels, # key
+ value_channels, # value
+ out_channels=value_channels, # out
+ num_heads=num_heads)
+
+ self.project_mm = nn.Sequential(nn.Conv1d(value_channels, value_channels, 1, 1),
+ nn.GELU(),
+ nn.Dropout(dropout)
+ )
+
+ def forward(self, x, l, l_mask):
+ # input x shape: (B, H*W, dim)
+ vis = self.vis_project(x.permute(0, 2, 1)) # (B, dim, H*W)
+
+        lang = self.image_lang_att(x, l, l_mask)  # (B, H*W, dim)
+
+ lang = lang.permute(0, 2, 1) # (B, dim, H*W)
+
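+        # element-wise (Hadamard) product of the projected visual features and the
+        # language-attended features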
+ mm = torch.mul(vis, lang)
+ mm = self.project_mm(mm) # (B, dim, H*W)
+
+ mm = mm.permute(0, 2, 1) # (B, H*W, dim)
+
+ return mm
+
+
+class SpatialImageLanguageAttention(nn.Module):
+ def __init__(self, v_in_channels, l_in_channels, key_channels, value_channels, out_channels=None, num_heads=1):
+ super(SpatialImageLanguageAttention, self).__init__()
+ # x shape: (B, H*W, v_in_channels)
+ # l input shape: (B, l_in_channels, N_l)
+ # l_mask shape: (B, N_l, 1)
+ self.v_in_channels = v_in_channels
+ self.l_in_channels = l_in_channels
+ self.out_channels = out_channels
+ self.key_channels = key_channels
+ self.value_channels = value_channels
+ self.num_heads = num_heads
+ if out_channels is None:
+ self.out_channels = self.value_channels
+
+ # Keys: language features: (B, l_in_channels, #words)
+ # avoid any form of spatial normalization because a sentence contains many padding 0s
+ self.f_key = nn.Sequential(
+ nn.Conv1d(self.l_in_channels, self.key_channels, kernel_size=1, stride=1),
+ )
+
+ # Queries: visual features: (B, H*W, v_in_channels)
+ self.f_query = nn.Sequential(
+ nn.Conv1d(self.v_in_channels, self.key_channels, kernel_size=1, stride=1),
+ nn.InstanceNorm1d(self.key_channels),
+ )
+
+ # Values: language features: (B, l_in_channels, #words)
+ self.f_value = nn.Sequential(
+ nn.Conv1d(self.l_in_channels, self.value_channels, kernel_size=1, stride=1),
+ )
+
+ # Out projection
+ self.W = nn.Sequential(
+ nn.Conv1d(self.value_channels, self.out_channels, kernel_size=1, stride=1),
+ nn.InstanceNorm1d(self.out_channels),
+ )
+
+ def forward(self, x, l, l_mask):
+ # x shape: (B, H*W, v_in_channels)
+ # l input shape: (B, l_in_channels, N_l)
+ # l_mask shape: (B, N_l, 1)
+ B, HW = x.size(0), x.size(1)
+        x = x.permute(0, 2, 1)  # (B, v_in_channels, H*W)
+ l_mask = l_mask.permute(0, 2, 1) # (B, N_l, 1) -> (B, 1, N_l)
+
+        query = self.f_query(x)  # (B, key_channels, H*W)
+ query = query.permute(0, 2, 1) # (B, H*W, key_channels)
+ key = self.f_key(l) # (B, key_channels, N_l)
+ value = self.f_value(l) # (B, self.value_channels, N_l)
+ key = key * l_mask # (B, key_channels, N_l)
+ value = value * l_mask # (B, self.value_channels, N_l)
+ n_l = value.size(-1)
+ query = query.reshape(B, HW, self.num_heads, self.key_channels//self.num_heads).permute(0, 2, 1, 3)
+ # (b, num_heads, H*W, self.key_channels//self.num_heads)
+ key = key.reshape(B, self.num_heads, self.key_channels//self.num_heads, n_l)
+ # (b, num_heads, self.key_channels//self.num_heads, n_l)
+ value = value.reshape(B, self.num_heads, self.value_channels//self.num_heads, n_l)
+ # # (b, num_heads, self.value_channels//self.num_heads, n_l)
+ l_mask = l_mask.unsqueeze(1) # (b, 1, 1, n_l)
+
+ sim_map = torch.matmul(query, key) # (B, self.num_heads, H*W, N_l)
+ sim_map = (self.key_channels ** -.5) * sim_map # scaled dot product
+
+        sim_map = sim_map + (1e4*l_mask - 1e4)  # adds 0 at real words and -1e4 at padding, so padded words get ~zero attention
+ sim_map = F.softmax(sim_map, dim=-1) # (B, num_heads, h*w, N_l)
+ out = torch.matmul(sim_map, value.permute(0, 1, 3, 2)) # (B, num_heads, H*W, self.value_channels//num_heads)
+ out = out.permute(0, 2, 1, 3).contiguous().reshape(B, HW, self.value_channels) # (B, H*W, value_channels)
+ out = out.permute(0, 2, 1) # (B, value_channels, HW)
+ out = self.W(out) # (B, value_channels, HW)
+ out = out.permute(0, 2, 1) # (B, HW, value_channels)
+
+ return out
diff --git a/lib/backbone_ppm.py b/lib/backbone_ppm.py
new file mode 100644
index 0000000000000000000000000000000000000000..c66045038e8f9a310968c3e86dabb5487ae318ad
--- /dev/null
+++ b/lib/backbone_ppm.py
@@ -0,0 +1,833 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+import numpy as np
+from timm.models.layers import DropPath, to_2tuple, trunc_normal_
+from .mmcv_custom import load_checkpoint
+from mmseg.utils import get_root_logger
+
+
+class Mlp(nn.Module):
+ """ Multilayer perceptron."""
+
+ def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
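+
+    Example:
+        with window_size=7, an input of shape (1, 56, 56, 96) is partitioned into
+        windows of shape (64, 7, 7, 96), i.e. 8 x 8 = 64 windows for the one image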
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
+ x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
+ return x
+
+
+class WindowAttention(nn.Module):
+ """ Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both of shifted and non-shifted window.
+
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim ** -0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = nn.Parameter(
+ torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = torch.arange(self.window_size[0])
+ coords_w = torch.arange(self.window_size[1])
+ coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
+ relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
+ relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+ relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
+ self.register_buffer("relative_position_index", relative_position_index)
+
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table, std=.02)
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x, mask=None):
+ """ Forward function.
+
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
+ q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
+ q = q * self.scale
+ attn = (q @ k.transpose(-2, -1))
+ relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
+ self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.view(-1, self.num_heads, N, N)
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)  # merge the attention heads back into the channel dimension
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
+
+class SwinTransformerBlock(nn.Module):
+ """ Swin Transformer Block.
+
+ Args:
+ dim (int): Number of input channels.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, num_heads, window_size=7, shift_size=0,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
+ act_layer=nn.GELU, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
+ qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
+
+ self.H = None
+ self.W = None
+
+ def forward(self, x, mask_matrix):
+ """ Forward function.
+
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ mask_matrix: Attention mask for cyclic shift.
+ """
+ B, L, C = x.shape
+ H, W = self.H, self.W
+ assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.view(B, H, W, C)
+
+ # pad feature maps to multiples of window size
+ pad_l = pad_t = 0
+ pad_r = (self.window_size - W % self.window_size) % self.window_size
+ pad_b = (self.window_size - H % self.window_size) % self.window_size
+ x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
+ _, Hp, Wp, _ = x.shape
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
+ attn_mask = mask_matrix
+ else:
+ shifted_x = x
+ attn_mask = None
+
+ # partition windows
+ x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
+ x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA
+ attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
+
+ # merge windows
+ attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
+ shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
+ else:
+ x = shifted_x
+
+ if pad_r > 0 or pad_b > 0:
+ x = x[:, :H, :W, :].contiguous()
+
+ x = x.view(B, H * W, C)
+
+        # residual connection, then FFN (feed-forward network)
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+
+class PatchMerging(nn.Module):
+ """ Patch Merging Layer
+
+ Args:
+ dim (int): Number of input channels.
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+ def __init__(self, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
+ self.norm = norm_layer(4 * dim)
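+        # each 2x2 neighborhood (4*dim channels) is concatenated and linearly
+        # reduced to 2*dim, halving the spatial resolution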
+
+ def forward(self, x, H, W):
+ """ Forward function.
+
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+
+ x = x.view(B, H, W, C)
+
+ # padding
+ pad_input = (H % 2 == 1) or (W % 2 == 1)
+ if pad_input:
+ x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
+
+
+class PatchEmbed(nn.Module):
+ """ Image to Patch Embedding
+
+ Args:
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ patch_size = to_2tuple(patch_size)
+ self.patch_size = patch_size
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ """Forward function."""
+ # padding
+ _, _, H, W = x.size()
+ if W % self.patch_size[1] != 0:
+ x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
+ if H % self.patch_size[0] != 0:
+ x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
+
+ x = self.proj(x) # B C Wh Ww
+ if self.norm is not None:
+ Wh, Ww = x.size(2), x.size(3)
+ x = x.flatten(2).transpose(1, 2)
+ x = self.norm(x)
+ x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
+
+ return x
+
+
+class MultiModalSwinTransformer(nn.Module):
+ def __init__(self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.,
+ attn_drop_rate=0.,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ use_checkpoint=False,
+ num_heads_fusion=[1, 1, 1, 1],
+ fusion_drop=0.0
+ ):
+ super().__init__()
+
+ self.pretrain_img_size = pretrain_img_size
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.out_indices = out_indices
+ self.frozen_stages = frozen_stages
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+
+ # absolute position embedding
+ if self.ape:
+ pretrain_img_size = to_2tuple(pretrain_img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [pretrain_img_size[0] // patch_size[0], pretrain_img_size[1] // patch_size[1]]
+
+ self.absolute_pos_embed = nn.Parameter(torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1]))
+ trunc_normal_(self.absolute_pos_embed, std=.02)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
+
+ # build layers
+ self.layers = nn.ModuleList()
+ for i_layer in range(self.num_layers):
+ layer = MMBasicLayer(
+ dim=int(embed_dim * 2 ** i_layer),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop_rate,
+ attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
+ norm_layer=norm_layer,
+ downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
+ use_checkpoint=use_checkpoint,
+ num_heads_fusion=num_heads_fusion[i_layer],
+ fusion_drop=fusion_drop
+ )
+ self.layers.append(layer)
+
+ num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
+ self.num_features = num_features
+
+ # add a norm layer for each output
+ for i_layer in out_indices:
+ layer = norm_layer(num_features[i_layer])
+ layer_name = f'norm{i_layer}'
+ self.add_module(layer_name, layer)
+
+ self._freeze_stages()
+
+ def _freeze_stages(self):
+ if self.frozen_stages >= 0:
+ self.patch_embed.eval()
+ for param in self.patch_embed.parameters():
+ param.requires_grad = False
+
+ if self.frozen_stages >= 1 and self.ape:
+ self.absolute_pos_embed.requires_grad = False
+
+ if self.frozen_stages >= 2:
+ self.pos_drop.eval()
+ for i in range(0, self.frozen_stages - 1):
+ m = self.layers[i]
+ m.eval()
+ for param in m.parameters():
+ param.requires_grad = False
+
+ def init_weights(self, pretrained=None):
+ """Initialize the weights in backbone.
+
+ Args:
+ pretrained (str, optional): Path to pre-trained weights.
+ Defaults to None.
+ """
+
+ def _init_weights(m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight, std=.02)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ nn.init.constant_(m.bias, 0)
+ elif isinstance(m, nn.LayerNorm):
+ nn.init.constant_(m.bias, 0)
+ nn.init.constant_(m.weight, 1.0)
+
+ if isinstance(pretrained, str):
+ self.apply(_init_weights)
+ logger = get_root_logger()
+ load_checkpoint(self, pretrained, strict=('upernet' in pretrained), logger=logger)
+ elif pretrained is None:
+ self.apply(_init_weights)
+ else:
+ raise TypeError('pretrained must be a str or None')
+
+ def forward(self, x, l, l_mask):
+ """Forward function."""
+ x = self.patch_embed(x)
+
+ Wh, Ww = x.size(2), x.size(3)
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
+ x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
+ else:
+ x = x.flatten(2).transpose(1, 2)
+ x = self.pos_drop(x)
+
+ outs = []
+ for i in range(self.num_layers):
+ layer = self.layers[i]
+ x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww, l, l_mask)
+
+ if i in self.out_indices:
+ norm_layer = getattr(self, f'norm{i}')
+ x_out = norm_layer(x_out) # output of a Block has shape (B, H*W, dim)
+
+ out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
+ outs.append(out)
+
+ return tuple(outs)
+
+ def train(self, mode=True):
+ """Convert the model into training mode while keep layers freezed."""
+ super(MultiModalSwinTransformer, self).train(mode)
+ self._freeze_stages()
+
+class LayerNorm(nn.Module):
+ r""" LayerNorm that supports two data formats: channels_last (default) or channels_first.
+ The ordering of the dimensions in the inputs. channels_last corresponds to inputs with
+ shape (batch_size, height, width, channels) while channels_first corresponds to inputs
+ with shape (batch_size, channels, height, width).
+ """
+ def __init__(self, normalized_shape, eps=1e-6, data_format="channels_first"):
+ super().__init__()
+ self.weight = nn.Parameter(torch.ones(normalized_shape))
+ self.bias = nn.Parameter(torch.zeros(normalized_shape))
+ self.eps = eps
+ self.data_format = data_format
+ if self.data_format not in ["channels_last", "channels_first"]:
+ raise NotImplementedError
+ self.normalized_shape = (normalized_shape, )
+
+ def forward(self, x):
+ if self.data_format == "channels_last":
+ return F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
+ elif self.data_format == "channels_first":
+ u = x.mean(1, keepdim=True)
+ s = (x - u).pow(2).mean(1, keepdim=True)
+ x = (x - u) / torch.sqrt(s + self.eps)
+ x = self.weight[:, None, None] * x + self.bias[:, None, None]
+ return x
+
+
+class MMBasicLayer(nn.Module):
+ def __init__(self,
+ dim,
+ depth,
+ num_heads,
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.,
+ attn_drop=0.,
+ drop_path=0.,
+ norm_layer=nn.LayerNorm,
+ downsample=None,
+ use_checkpoint=False,
+ num_heads_fusion=1,
+ fusion_drop=0.0
+ ):
+ super().__init__()
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.depth = depth
+ self.use_checkpoint = use_checkpoint
+ self.dim = dim
+
+ # build blocks
+ self.blocks = nn.ModuleList([
+ SwinTransformerBlock(
+ dim=dim,
+ num_heads=num_heads,
+ window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop,
+ attn_drop=attn_drop,
+ drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
+ norm_layer=norm_layer)
+ for i in range(depth)])
+
+ # fuse before downsampling
+        self.fusion = PWAM(dim,  # number of channels of the visual input and of the fused output
+ dim, # v_in
+ 768, # l_in
+ dim, # key
+ dim, # value
+ num_heads=num_heads_fusion,
+ dropout=fusion_drop)
+
+ self.res_gate = nn.Sequential(
+ nn.Linear(dim, dim, bias=False),
+ nn.GELU(),
+ nn.Linear(dim, dim, bias=False),
+ nn.Tanh()
+ )
+
+        self.psizes = [1, 2, 3, 6]  # pooled output sizes of the pyramid branches
+ reduction_dim = dim // 4
+ self.pyramids = nn.ModuleList()
+ self.fusions = nn.ModuleList()
+ self.mixer = nn.Sequential(
+ nn.Linear(dim*2, dim),
+ nn.LayerNorm(dim),
+ nn.Linear(dim, dim),
+ nn.GELU()
+ )
+
+        for p in self.psizes:
+            self.pyramids.append(
+ nn.Sequential(
+ nn.AdaptiveAvgPool2d(p),
+ nn.Conv2d(dim, dim*4, kernel_size=1, bias=False),
+ LayerNorm(dim*4),
+ nn.Conv2d(dim*4, dim, kernel_size=1, bias=False),
+ nn.GELU(),
+ nn.Conv2d(dim, reduction_dim, kernel_size=1, bias=False),
+ )
+ )
+ self.fusions.append(
+                PWAM(reduction_dim,  # number of channels of the visual input and of the fused output
+ reduction_dim, # v_in
+ 768, # l_in
+ reduction_dim, # key
+ reduction_dim, # value
+ num_heads=num_heads_fusion,
+ dropout=fusion_drop)
+ )
+ self.reduction_dim = reduction_dim
+
+
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+ # initialize the gate to 0
+ nn.init.zeros_(self.res_gate[0].weight)
+ nn.init.zeros_(self.res_gate[2].weight)
+
+ def forward(self, x, H, W, l, l_mask):
+ """ Forward function.
+
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+
+ out = []
+
+        # P3WAM pyramid fusion: pool the visual features to p x p (p in self.psizes),
+        # project each scale to reduction_dim channels, fuse it with the language
+        # features through its own PWAM, and upsample the result back to (H, W)
+ x_reshape = x.permute(0,2,1).view(x.shape[0], x.shape[2], H, W)
+ x_size = x_reshape.size()
+ for i, p in enumerate(self.psizes):
+            px = self.pyramids[i](x_reshape)              # (B, reduction_dim, p, p)
+            px = px.flatten(2).permute(0, 2, 1)           # (B, p*p, reduction_dim)
+            px_residual = self.fusions[i](px, l, l_mask)  # (B, p*p, reduction_dim)
+            px_residual = px_residual.permute(0, 2, 1).view(x.shape[0], self.reduction_dim, p, p)
+            out.append(F.interpolate(px_residual, x_size[2:], mode='bilinear',
+                                     align_corners=True).flatten(2).permute(0, 2, 1))  # (B, H*W, reduction_dim)
+
+
+ # PWAM fusion
+ x_residual = self.fusion(x, l, l_mask)
+ out.append(x_residual)
+ # apply a gate on the residual
+ x = x + (self.res_gate(x_residual) * x_residual)
+
+
+
+        x_residual = self.mixer(torch.cat(out, dim=2))  # (B, H*W, 2*dim) -> (B, H*W, dim)
+
+ if self.downsample is not None:
+ x_down = self.downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+
+class PWAM(nn.Module):
+ def __init__(self, dim, v_in_channels, l_in_channels, key_channels, value_channels, num_heads=0, dropout=0.0):
+ super(PWAM, self).__init__()
+ # input x shape: (B, H*W, dim)
+ self.vis_project = nn.Sequential(nn.Conv1d(dim, dim, 1, 1), # the init function sets bias to 0 if bias is True
+ nn.GELU(),
+ nn.Dropout(dropout)
+ )
+
+ self.image_lang_att = SpatialImageLanguageAttention(v_in_channels, # v_in
+ l_in_channels, # l_in
+ key_channels, # key
+ value_channels, # value
+ out_channels=value_channels, # out
+ num_heads=num_heads)
+
+ self.project_mm = nn.Sequential(nn.Conv1d(value_channels, value_channels, 1, 1),
+ nn.GELU(),
+ nn.Dropout(dropout)
+ )
+
+ def forward(self, x, l, l_mask):
+ # input x shape: (B, H*W, dim)
+ vis = self.vis_project(x.permute(0, 2, 1)) # (B, dim, H*W)
+
+ lang = self.image_lang_att(x, l.permute(0,2,1), l_mask) # (B, H*W, dim)
+
+ lang = lang.permute(0, 2, 1) # (B, dim, H*W)
+
+ mm = torch.mul(vis, lang)
+ mm = self.project_mm(mm) # (B, dim, H*W)
+
+ mm = mm.permute(0, 2, 1) # (B, H*W, dim)
+
+ return mm
+
+
+class SpatialImageLanguageAttention(nn.Module):
+ def __init__(self, v_in_channels, l_in_channels, key_channels, value_channels, out_channels=None, num_heads=1):
+ super(SpatialImageLanguageAttention, self).__init__()
+ # x shape: (B, H*W, v_in_channels)
+ # l input shape: (B, l_in_channels, N_l)
+ # l_mask shape: (B, N_l, 1)
+ self.v_in_channels = v_in_channels
+ self.l_in_channels = l_in_channels
+ self.out_channels = out_channels
+ self.key_channels = key_channels
+ self.value_channels = value_channels
+ self.num_heads = num_heads
+ if out_channels is None:
+ self.out_channels = self.value_channels
+
+ # Keys: language features: (B, l_in_channels, #words)
+ # avoid any form of spatial normalization because a sentence contains many padding 0s
+ self.f_key = nn.Sequential(
+ nn.Conv1d(self.l_in_channels, self.key_channels, kernel_size=1, stride=1),
+ )
+
+ # Queries: visual features: (B, H*W, v_in_channels)
+ self.f_query = nn.Sequential(
+            nn.Linear(self.v_in_channels, self.key_channels),
+            nn.LayerNorm(self.key_channels),
+ )
+
+ # Values: language features: (B, l_in_channels, #words)
+ self.f_value = nn.Sequential(
+ nn.Conv1d(self.l_in_channels, self.value_channels, kernel_size=1, stride=1),
+ )
+
+ # Out projection
+ self.W = nn.Sequential(
+            nn.Linear(self.value_channels, self.out_channels),
+            nn.LayerNorm(self.out_channels),
+ )
+
+ def forward(self, x, l, l_mask):
+ # x shape: (B, H*W, v_in_channels)
+ # l input shape: (B, l_in_channels, N_l)
+ # l_mask shape: (B, N_l, 1)
+ B, HW = x.size(0), x.size(1)
+        l_mask = l_mask.permute(0, 2, 1)  # (B, N_l, 1) -> (B, 1, N_l)
+
+        query = self.f_query(x)  # (B, H*W, key_channels)
+ key = self.f_key(l) # (B, key_channels, N_l)
+ value = self.f_value(l) # (B, self.value_channels, N_l)
+ key = key * l_mask # (B, key_channels, N_l)
+ value = value * l_mask # (B, self.value_channels, N_l)
+ n_l = value.size(-1)
+ query = query.reshape(B, HW, self.num_heads, self.key_channels//self.num_heads).permute(0, 2, 1, 3)
+ # (b, num_heads, H*W, self.key_channels//self.num_heads)
+ key = key.reshape(B, self.num_heads, self.key_channels//self.num_heads, n_l)
+ # (b, num_heads, self.key_channels//self.num_heads, n_l)
+ value = value.reshape(B, self.num_heads, self.value_channels//self.num_heads, n_l)
+ # # (b, num_heads, self.value_channels//self.num_heads, n_l)
+ l_mask = l_mask.unsqueeze(1) # (b, 1, 1, n_l)
+
+ sim_map = torch.matmul(query, key) # (B, self.num_heads, H*W, N_l)
+ sim_map = (self.key_channels ** -.5) * sim_map # scaled dot product
+
+        sim_map = sim_map + (1e4*l_mask - 1e4)  # adds 0 at real words and -1e4 at padding, so padded words get ~zero attention
+ sim_map = F.softmax(sim_map, dim=-1) # (B, num_heads, h*w, N_l)
+ out = torch.matmul(sim_map, value.permute(0, 1, 3, 2)) # (B, num_heads, H*W, self.value_channels//num_heads)
+ out = out.permute(0, 2, 1, 3).contiguous().reshape(B, HW, self.value_channels) # (B, H*W, value_channels)
+        out = self.W(out)  # (B, H*W, out_channels)
+
+ return out
diff --git a/lib/mask_predictor.py b/lib/mask_predictor.py
new file mode 100644
index 0000000000000000000000000000000000000000..bea268a17d3a39c216a48496846327cfcbed425a
--- /dev/null
+++ b/lib/mask_predictor.py
@@ -0,0 +1,72 @@
+import torch
+from torch import nn
+from torch.nn import functional as F
+from collections import OrderedDict
+
+
+class SimpleDecoding(nn.Module):
+ def __init__(self, c4_dims, factor=2):
+ super(SimpleDecoding, self).__init__()
+
+ hidden_size = c4_dims//factor
+ c4_size = c4_dims
+ c3_size = c4_dims//(factor**1)
+ c2_size = c4_dims//(factor**2)
+ c1_size = c4_dims//(factor**3)
+
+ self.conv1_4 = nn.Conv2d(c4_size+c3_size, hidden_size, 3, padding=1, bias=False)
+ self.bn1_4 = nn.BatchNorm2d(hidden_size)
+ self.relu1_4 = nn.ReLU()
+ self.conv2_4 = nn.Conv2d(hidden_size, hidden_size, 3, padding=1, bias=False)
+ self.bn2_4 = nn.BatchNorm2d(hidden_size)
+ self.relu2_4 = nn.ReLU()
+
+ self.conv1_3 = nn.Conv2d(hidden_size + c2_size, hidden_size, 3, padding=1, bias=False)
+ self.bn1_3 = nn.BatchNorm2d(hidden_size)
+ self.relu1_3 = nn.ReLU()
+ self.conv2_3 = nn.Conv2d(hidden_size, hidden_size, 3, padding=1, bias=False)
+ self.bn2_3 = nn.BatchNorm2d(hidden_size)
+ self.relu2_3 = nn.ReLU()
+
+ self.conv1_2 = nn.Conv2d(hidden_size + c1_size, hidden_size, 3, padding=1, bias=False)
+ self.bn1_2 = nn.BatchNorm2d(hidden_size)
+ self.relu1_2 = nn.ReLU()
+ self.conv2_2 = nn.Conv2d(hidden_size, hidden_size, 3, padding=1, bias=False)
+ self.bn2_2 = nn.BatchNorm2d(hidden_size)
+ self.relu2_2 = nn.ReLU()
+
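+        # the final 1x1 conv produces 2-channel logits (background vs. referred target)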
+ self.conv1_1 = nn.Conv2d(hidden_size, 2, 1)
+
+ def forward(self, x_c4, x_c3, x_c2, x_c1):
+ # fuse Y4 and Y3
+ if x_c4.size(-2) < x_c3.size(-2) or x_c4.size(-1) < x_c3.size(-1):
+ x_c4 = F.interpolate(input=x_c4, size=(x_c3.size(-2), x_c3.size(-1)), mode='bilinear', align_corners=True)
+ x = torch.cat([x_c4, x_c3], dim=1)
+ x = self.conv1_4(x)
+ x = self.bn1_4(x)
+ x = self.relu1_4(x)
+ x = self.conv2_4(x)
+ x = self.bn2_4(x)
+ x = self.relu2_4(x)
+ # fuse top-down features and Y2 features
+ if x.size(-2) < x_c2.size(-2) or x.size(-1) < x_c2.size(-1):
+ x = F.interpolate(input=x, size=(x_c2.size(-2), x_c2.size(-1)), mode='bilinear', align_corners=True)
+ x = torch.cat([x, x_c2], dim=1)
+ x = self.conv1_3(x)
+ x = self.bn1_3(x)
+ x = self.relu1_3(x)
+ x = self.conv2_3(x)
+ x = self.bn2_3(x)
+ x = self.relu2_3(x)
+ # fuse top-down features and Y1 features
+ if x.size(-2) < x_c1.size(-2) or x.size(-1) < x_c1.size(-1):
+ x = F.interpolate(input=x, size=(x_c1.size(-2), x_c1.size(-1)), mode='bilinear', align_corners=True)
+ x = torch.cat([x, x_c1], dim=1)
+ x = self.conv1_2(x)
+ x = self.bn1_2(x)
+ x = self.relu1_2(x)
+ x = self.conv2_2(x)
+ x = self.bn2_2(x)
+ x = self.relu2_2(x)
+
+ return self.conv1_1(x)
diff --git a/lib/mmcv_custom/__init__.py b/lib/mmcv_custom/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..7e0e39b03e2a149c33c372472b2b814a872ec55c
--- /dev/null
+++ b/lib/mmcv_custom/__init__.py
@@ -0,0 +1,5 @@
+# -*- coding: utf-8 -*-
+
+from .checkpoint import load_checkpoint
+
+__all__ = ['load_checkpoint']
diff --git a/lib/mmcv_custom/__pycache__/__init__.cpython-37.pyc b/lib/mmcv_custom/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..acafc2c5036ce675e89bf13b36b1d28278cae041
Binary files /dev/null and b/lib/mmcv_custom/__pycache__/__init__.cpython-37.pyc differ
diff --git a/lib/mmcv_custom/__pycache__/__init__.cpython-38.pyc b/lib/mmcv_custom/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d7d73dc23ca40ec8058562438872e67c99d6fbba
Binary files /dev/null and b/lib/mmcv_custom/__pycache__/__init__.cpython-38.pyc differ
diff --git a/lib/mmcv_custom/__pycache__/checkpoint.cpython-37.pyc b/lib/mmcv_custom/__pycache__/checkpoint.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..da9ccb28add811535769d786d89b0c883f529cc8
Binary files /dev/null and b/lib/mmcv_custom/__pycache__/checkpoint.cpython-37.pyc differ
diff --git a/lib/mmcv_custom/__pycache__/checkpoint.cpython-38.pyc b/lib/mmcv_custom/__pycache__/checkpoint.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..167ec991f071372ee7f5f606485bb560438c5b1f
Binary files /dev/null and b/lib/mmcv_custom/__pycache__/checkpoint.cpython-38.pyc differ
diff --git a/lib/mmcv_custom/checkpoint.py b/lib/mmcv_custom/checkpoint.py
new file mode 100644
index 0000000000000000000000000000000000000000..521820d531fe52148c7081ff04ea97bec5640cb9
--- /dev/null
+++ b/lib/mmcv_custom/checkpoint.py
@@ -0,0 +1,504 @@
+# Copyright (c) Open-MMLab. All rights reserved.
+import io
+import os
+import os.path as osp
+import pkgutil
+import time
+import warnings
+from collections import OrderedDict
+from importlib import import_module
+from tempfile import TemporaryDirectory
+
+import torch
+import torchvision
+from torch.optim import Optimizer
+from torch.utils import model_zoo
+from torch.nn import functional as F
+
+import mmcv
+from mmcv.fileio import FileClient
+from mmcv.fileio import load as load_file
+from mmcv.parallel import is_module_wrapper
+from mmcv.utils import mkdir_or_exist
+from mmcv.runner import get_dist_info
+
+ENV_MMCV_HOME = 'MMCV_HOME'
+ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME'
+DEFAULT_CACHE_DIR = '~/.cache'
+
+
+def _get_mmcv_home():
+ mmcv_home = os.path.expanduser(
+ os.getenv(
+ ENV_MMCV_HOME,
+ os.path.join(
+ os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv')))
+
+ mkdir_or_exist(mmcv_home)
+ return mmcv_home
+
+
+def load_state_dict(module, state_dict, strict=False, logger=None):
+ """Load state_dict to a module.
+
+ This method is modified from :meth:`torch.nn.Module.load_state_dict`.
+ Default value for ``strict`` is set to ``False`` and the message for
+ param mismatch will NOT be shown if strict is False.
+
+ Args:
+ module (Module): Module that receives the state_dict.
+ state_dict (OrderedDict): Weights.
+ strict (bool): whether to strictly enforce that the keys
+ in :attr:`state_dict` match the keys returned by this module's
+ :meth:`~torch.nn.Module.state_dict` function. Default: ``False``.
+ logger (:obj:`logging.Logger`, optional): Logger to log the error
+ message. If not specified, print function will be used.
+ """
+ unexpected_keys = []
+ all_missing_keys = []
+ err_msg = []
+
+ metadata = getattr(state_dict, '_metadata', None)
+ state_dict = state_dict.copy()
+ if metadata is not None:
+ state_dict._metadata = metadata
+
+ # use _load_from_state_dict to enable checkpoint version control
+ def load(module, prefix=''):
+ # recursively check parallel module in case that the model has a
+ # complicated structure, e.g., nn.Module(nn.Module(DDP))
+ if is_module_wrapper(module):
+ module = module.module
+ local_metadata = {} if metadata is None else metadata.get(
+ prefix[:-1], {})
+ module._load_from_state_dict(state_dict, prefix, local_metadata, True,
+ all_missing_keys, unexpected_keys,
+ err_msg)
+ for name, child in module._modules.items():
+ if child is not None:
+ load(child, prefix + name + '.')
+
+ load(module)
+ load = None # break load->load reference cycle
+
+ # ignore "num_batches_tracked" of BN layers
+ missing_keys = [
+ key for key in all_missing_keys if 'num_batches_tracked' not in key
+ ]
+
+ if unexpected_keys:
+ err_msg.append('unexpected key in source '
+ f'state_dict: {", ".join(unexpected_keys)}\n')
+ if missing_keys:
+ err_msg.append(
+ f'missing keys in source state_dict: {", ".join(missing_keys)}\n')
+
+ if strict:
+ rank, _ = get_dist_info()
+ if len(err_msg) > 0 and rank == 0:
+ err_msg.insert(
+ 0, 'The model and loaded state dict do not match exactly\n')
+ err_msg = '\n'.join(err_msg)
+ if strict:
+ raise RuntimeError(err_msg)
+ elif logger is not None:
+ logger.warning(err_msg)
+ else:
+ print(err_msg)
+
+
+def load_url_dist(url, model_dir=None):
+ """In distributed setting, this function only download checkpoint at local
+ rank 0."""
+ rank, world_size = get_dist_info()
+ rank = int(os.environ.get('LOCAL_RANK', rank))
+ if rank == 0:
+ checkpoint = model_zoo.load_url(url, model_dir=model_dir)
+ if world_size > 1:
+ torch.distributed.barrier()
+ if rank > 0:
+ checkpoint = model_zoo.load_url(url, model_dir=model_dir)
+ return checkpoint
+
+
+def load_pavimodel_dist(model_path, map_location=None):
+ """In distributed setting, this function only download checkpoint at local
+ rank 0."""
+ try:
+ from pavi import modelcloud
+ except ImportError:
+ raise ImportError(
+ 'Please install pavi to load checkpoint from modelcloud.')
+ rank, world_size = get_dist_info()
+ rank = int(os.environ.get('LOCAL_RANK', rank))
+ if rank == 0:
+ model = modelcloud.get(model_path)
+ with TemporaryDirectory() as tmp_dir:
+ downloaded_file = osp.join(tmp_dir, model.name)
+ model.download(downloaded_file)
+ checkpoint = torch.load(downloaded_file, map_location=map_location)
+ if world_size > 1:
+ torch.distributed.barrier()
+ if rank > 0:
+ model = modelcloud.get(model_path)
+ with TemporaryDirectory() as tmp_dir:
+ downloaded_file = osp.join(tmp_dir, model.name)
+ model.download(downloaded_file)
+ checkpoint = torch.load(
+ downloaded_file, map_location=map_location)
+ return checkpoint
+
+
+def load_fileclient_dist(filename, backend, map_location):
+ """In distributed setting, this function only download checkpoint at local
+ rank 0."""
+ rank, world_size = get_dist_info()
+ rank = int(os.environ.get('LOCAL_RANK', rank))
+ allowed_backends = ['ceph']
+ if backend not in allowed_backends:
+ raise ValueError(f'Load from Backend {backend} is not supported.')
+ if rank == 0:
+ fileclient = FileClient(backend=backend)
+ buffer = io.BytesIO(fileclient.get(filename))
+ checkpoint = torch.load(buffer, map_location=map_location)
+ if world_size > 1:
+ torch.distributed.barrier()
+ if rank > 0:
+ fileclient = FileClient(backend=backend)
+ buffer = io.BytesIO(fileclient.get(filename))
+ checkpoint = torch.load(buffer, map_location=map_location)
+ return checkpoint
+
+
+def get_torchvision_models():
+ model_urls = dict()
+ for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__):
+ if ispkg:
+ continue
+ _zoo = import_module(f'torchvision.models.{name}')
+ if hasattr(_zoo, 'model_urls'):
+ _urls = getattr(_zoo, 'model_urls')
+ model_urls.update(_urls)
+ return model_urls
+
+
+def get_external_models():
+ mmcv_home = _get_mmcv_home()
+ default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json')
+ default_urls = load_file(default_json_path)
+ assert isinstance(default_urls, dict)
+ external_json_path = osp.join(mmcv_home, 'open_mmlab.json')
+ if osp.exists(external_json_path):
+ external_urls = load_file(external_json_path)
+ assert isinstance(external_urls, dict)
+ default_urls.update(external_urls)
+
+ return default_urls
+
+
+def get_mmcls_models():
+ mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json')
+ mmcls_urls = load_file(mmcls_json_path)
+
+ return mmcls_urls
+
+
+def get_deprecated_model_names():
+ deprecate_json_path = osp.join(mmcv.__path__[0],
+ 'model_zoo/deprecated.json')
+ deprecate_urls = load_file(deprecate_json_path)
+ assert isinstance(deprecate_urls, dict)
+
+ return deprecate_urls
+
+
+def _process_mmcls_checkpoint(checkpoint):
+ state_dict = checkpoint['state_dict']
+ new_state_dict = OrderedDict()
+ for k, v in state_dict.items():
+ if k.startswith('backbone.'):
+ new_state_dict[k[9:]] = v
+ new_checkpoint = dict(state_dict=new_state_dict)
+
+ return new_checkpoint
+
+
+def _load_checkpoint(filename, map_location=None):
+ """Load checkpoint from somewhere (modelzoo, file, url).
+
+ Args:
+ filename (str): Accept local filepath, URL, ``torchvision://xxx``,
+ ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
+ details.
+ map_location (str | None): Same as :func:`torch.load`. Default: None.
+
+ Returns:
+ dict | OrderedDict: The loaded checkpoint. It can be either an
+ OrderedDict storing model weights or a dict containing other
+ information, which depends on the checkpoint.
+ """
+ if filename.startswith('modelzoo://'):
+ warnings.warn('The URL scheme of "modelzoo://" is deprecated, please '
+ 'use "torchvision://" instead')
+ model_urls = get_torchvision_models()
+ model_name = filename[11:]
+ checkpoint = load_url_dist(model_urls[model_name])
+ elif filename.startswith('torchvision://'):
+ model_urls = get_torchvision_models()
+ model_name = filename[14:]
+ checkpoint = load_url_dist(model_urls[model_name])
+ elif filename.startswith('open-mmlab://'):
+ model_urls = get_external_models()
+ model_name = filename[13:]
+ deprecated_urls = get_deprecated_model_names()
+ if model_name in deprecated_urls:
+ warnings.warn(f'open-mmlab://{model_name} is deprecated in favor '
+ f'of open-mmlab://{deprecated_urls[model_name]}')
+ model_name = deprecated_urls[model_name]
+ model_url = model_urls[model_name]
+ # check if is url
+ if model_url.startswith(('http://', 'https://')):
+ checkpoint = load_url_dist(model_url)
+ else:
+ filename = osp.join(_get_mmcv_home(), model_url)
+ if not osp.isfile(filename):
+ raise IOError(f'{filename} is not a checkpoint file')
+ checkpoint = torch.load(filename, map_location=map_location)
+ elif filename.startswith('mmcls://'):
+ model_urls = get_mmcls_models()
+ model_name = filename[8:]
+ checkpoint = load_url_dist(model_urls[model_name])
+ checkpoint = _process_mmcls_checkpoint(checkpoint)
+ elif filename.startswith(('http://', 'https://')):
+ checkpoint = load_url_dist(filename)
+ elif filename.startswith('pavi://'):
+ model_path = filename[7:]
+ checkpoint = load_pavimodel_dist(model_path, map_location=map_location)
+ elif filename.startswith('s3://'):
+ checkpoint = load_fileclient_dist(
+ filename, backend='ceph', map_location=map_location)
+ else:
+ if not osp.isfile(filename):
+ raise IOError(f'{filename} is not a checkpoint file')
+ checkpoint = torch.load(filename, map_location=map_location)
+ return checkpoint
+
+
+def load_checkpoint(model,
+ filename,
+ map_location='cpu',
+ strict=False,
+ logger=None):
+ """Load checkpoint from a file or URI.
+
+ Args:
+ model (Module): Module to load checkpoint.
+ filename (str): Accept local filepath, URL, ``torchvision://xxx``,
+ ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for
+ details.
+ map_location (str): Same as :func:`torch.load`.
+ strict (bool): Whether to allow different params for the model and
+ checkpoint.
+ logger (:mod:`logging.Logger` or None): The logger for error message.
+
+ Returns:
+ dict or OrderedDict: The loaded checkpoint.
+ """
+ checkpoint = _load_checkpoint(filename, map_location)
+ # OrderedDict is a subclass of dict
+ if not isinstance(checkpoint, dict):
+ raise RuntimeError(
+ f'No state_dict found in checkpoint file {filename}')
+ # get state_dict from checkpoint
+ if 'state_dict' in checkpoint:
+ state_dict = checkpoint['state_dict']
+ elif 'model' in checkpoint:
+ state_dict = checkpoint['model']
+ else:
+ state_dict = checkpoint
+ # strip prefix of state_dict
+ if list(state_dict.keys())[0].startswith('module.'):
+ state_dict = {k[7:]: v for k, v in state_dict.items()}
+    # for UPerNet checkpoints, keep only the backbone weights
+    if list(state_dict.keys())[0].startswith('backbone.'):
+        print('Stripping the "backbone." prefix and loading UPerNet backbone weights into our Swin encoder')
+ state_dict = {k.replace('backbone.', ''): v for k, v in state_dict.items() if k.startswith('backbone.')}
+ # for MoBY, load model of online branch
+ if sorted(list(state_dict.keys()))[0].startswith('encoder'):
+ state_dict = {k.replace('encoder.', ''): v for k, v in state_dict.items() if k.startswith('encoder.')}
+
+ # reshape absolute position embedding
+ if state_dict.get('absolute_pos_embed') is not None:
+ absolute_pos_embed = state_dict['absolute_pos_embed']
+ N1, L, C1 = absolute_pos_embed.size()
+ N2, C2, H, W = model.absolute_pos_embed.size()
+ if N1 != N2 or C1 != C2 or L != H*W:
+ logger.warning("Error in loading absolute_pos_embed, pass")
+ else:
+ state_dict['absolute_pos_embed'] = absolute_pos_embed.view(N2, H, W, C2).permute(0, 3, 1, 2)
+
+ # interpolate position bias table if needed
+ relative_position_bias_table_keys = [k for k in state_dict.keys() if "relative_position_bias_table" in k]
+ for table_key in relative_position_bias_table_keys:
+ table_pretrained = state_dict[table_key]
+ table_current = model.state_dict()[table_key]
+ L1, nH1 = table_pretrained.size()
+ L2, nH2 = table_current.size()
+ if nH1 != nH2:
+ logger.warning(f"Error in loading {table_key}, pass")
+ else:
+ if L1 != L2:
+ S1 = int(L1 ** 0.5)
+ S2 = int(L2 ** 0.5)
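+                # resize the (2*Wh-1) x (2*Ww-1) relative position bias table with
+                # bicubic interpolation so that checkpoints trained with a different
+                # window size can still be loaded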
+ table_pretrained_resized = F.interpolate(
+ table_pretrained.permute(1, 0).view(1, nH1, S1, S1),
+ size=(S2, S2), mode='bicubic')
+ state_dict[table_key] = table_pretrained_resized.view(nH2, L2).permute(1, 0)
+
+ # load state_dict
+ load_state_dict(model, state_dict, strict, logger)
+ return checkpoint
+
+
+def weights_to_cpu(state_dict):
+ """Copy a model state_dict to cpu.
+
+ Args:
+ state_dict (OrderedDict): Model weights on GPU.
+
+ Returns:
+        OrderedDict: Model weights on CPU.
+ """
+ state_dict_cpu = OrderedDict()
+ for key, val in state_dict.items():
+ state_dict_cpu[key] = val.cpu()
+ return state_dict_cpu
+
+
+def _save_to_state_dict(module, destination, prefix, keep_vars):
+ """Saves module state to `destination` dictionary.
+
+ This method is modified from :meth:`torch.nn.Module._save_to_state_dict`.
+
+ Args:
+ module (nn.Module): The module to generate state_dict.
+ destination (dict): A dict where state will be stored.
+ prefix (str): The prefix for parameters and buffers used in this
+            module.
+        keep_vars (bool): Whether to keep the variable property of the
+            parameters. Default: False.
+    """
+ for name, param in module._parameters.items():
+ if param is not None:
+ destination[prefix + name] = param if keep_vars else param.detach()
+ for name, buf in module._buffers.items():
+ # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d
+ if buf is not None:
+ destination[prefix + name] = buf if keep_vars else buf.detach()
+
+
+def get_state_dict(module, destination=None, prefix='', keep_vars=False):
+ """Returns a dictionary containing a whole state of the module.
+
+ Both parameters and persistent buffers (e.g. running averages) are
+ included. Keys are corresponding parameter and buffer names.
+
+ This method is modified from :meth:`torch.nn.Module.state_dict` to
+ recursively check parallel module in case that the model has a complicated
+ structure, e.g., nn.Module(nn.Module(DDP)).
+
+ Args:
+ module (nn.Module): The module to generate state_dict.
+ destination (OrderedDict): Returned dict for the state of the
+ module.
+ prefix (str): Prefix of the key.
+ keep_vars (bool): Whether to keep the variable property of the
+ parameters. Default: False.
+
+ Returns:
+ dict: A dictionary containing a whole state of the module.
+ """
+ # recursively check parallel module in case that the model has a
+ # complicated structure, e.g., nn.Module(nn.Module(DDP))
+ if is_module_wrapper(module):
+ module = module.module
+
+ # below is the same as torch.nn.Module.state_dict()
+ if destination is None:
+ destination = OrderedDict()
+ destination._metadata = OrderedDict()
+ destination._metadata[prefix[:-1]] = local_metadata = dict(
+ version=module._version)
+ _save_to_state_dict(module, destination, prefix, keep_vars)
+ for name, child in module._modules.items():
+ if child is not None:
+ get_state_dict(
+ child, destination, prefix + name + '.', keep_vars=keep_vars)
+ for hook in module._state_dict_hooks.values():
+ hook_result = hook(module, destination, prefix, local_metadata)
+ if hook_result is not None:
+ destination = hook_result
+ return destination
+
+
+def save_checkpoint(model, filename, optimizer=None, meta=None):
+ """Save checkpoint to file.
+
+ The checkpoint will have 3 fields: ``meta``, ``state_dict`` and
+ ``optimizer``. By default ``meta`` will contain version and time info.
+
+ Args:
+ model (Module): Module whose params are to be saved.
+ filename (str): Checkpoint filename.
+ optimizer (:obj:`Optimizer`, optional): Optimizer to be saved.
+ meta (dict, optional): Metadata to be saved in checkpoint.
+ """
+ if meta is None:
+ meta = {}
+ elif not isinstance(meta, dict):
+ raise TypeError(f'meta must be a dict or None, but got {type(meta)}')
+ meta.update(mmcv_version=mmcv.__version__, time=time.asctime())
+
+ if is_module_wrapper(model):
+ model = model.module
+
+ if hasattr(model, 'CLASSES') and model.CLASSES is not None:
+ # save class name to the meta
+ meta.update(CLASSES=model.CLASSES)
+
+ checkpoint = {
+ 'meta': meta,
+ 'state_dict': weights_to_cpu(get_state_dict(model))
+ }
+ # save optimizer state dict in the checkpoint
+ if isinstance(optimizer, Optimizer):
+ checkpoint['optimizer'] = optimizer.state_dict()
+ elif isinstance(optimizer, dict):
+ checkpoint['optimizer'] = {}
+ for name, optim in optimizer.items():
+ checkpoint['optimizer'][name] = optim.state_dict()
+
+ if filename.startswith('pavi://'):
+ try:
+ from pavi import modelcloud
+ from pavi.exception import NodeNotFoundError
+ except ImportError:
+ raise ImportError(
+ 'Please install pavi to load checkpoint from modelcloud.')
+ model_path = filename[7:]
+ root = modelcloud.Folder()
+ model_dir, model_name = osp.split(model_path)
+ try:
+ model = modelcloud.get(model_dir)
+ except NodeNotFoundError:
+ model = root.create_training_model(model_dir)
+ with TemporaryDirectory() as tmp_dir:
+ checkpoint_file = osp.join(tmp_dir, model_name)
+ with open(checkpoint_file, 'wb') as f:
+ torch.save(checkpoint, f)
+ f.flush()
+ model.create_file(checkpoint_file, name=model_name)
+ else:
+ mmcv.mkdir_or_exist(osp.dirname(filename))
+ # immediately flush buffer
+ with open(filename, 'wb') as f:
+ torch.save(checkpoint, f)
+ f.flush()
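+
+
+# Usage sketch (illustrative only; `model`, `optimizer` and `epoch` are assumed
+# to exist in the calling code and are not defined in this file):
+#
+#   save_checkpoint(model, 'checkpoints/latest.pth', optimizer=optimizer,
+#                   meta=dict(epoch=epoch))
+#
+# The saved dict has the keys 'meta', 'state_dict' and (optionally) 'optimizer',
+# and can be read back with torch.load('checkpoints/latest.pth').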
diff --git a/lib/multimodal_segmentation.py b/lib/multimodal_segmentation.py
new file mode 100644
index 0000000000000000000000000000000000000000..51ad9c974b7923164baac8933e4041e2d96e7806
--- /dev/null
+++ b/lib/multimodal_segmentation.py
@@ -0,0 +1,144 @@
+import torch
+import torch.nn as nn
+from .mask_predictor import SimpleDecoding
+#from .backbone import MultiModalSwinTransformer
+from .multimodal_swin import MultiModalSwin
+from ._utils import LAVT, LAVTOne
+
+__all__ = ['lavt', 'lavt_one']
+
+
+# LAVT
+def _segm_lavt(pretrained, args):
+ # initialize the SwinTransformer backbone with the specified version
+ if args.swin_type == 'tiny':
+ embed_dim = 96
+ depths = [2, 2, 6, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'small':
+ embed_dim = 96
+ depths = [2, 2, 18, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'base':
+ embed_dim = 128
+ depths = [2, 2, 18, 2]
+ num_heads = [4, 8, 16, 32]
+ elif args.swin_type == 'large':
+ embed_dim = 192
+ depths = [2, 2, 18, 2]
+ num_heads = [6, 12, 24, 48]
+ else:
+ assert False
+ # args.window12 added for test.py because state_dict is loaded after model initialization
+ if 'window12' in pretrained or args.window12:
+ print('Window size 12!')
+ window_size = 12
+ else:
+ window_size = 7
+
+ if args.mha:
+ mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
+ mha = [int(a) for a in mha]
+ else:
+ mha = [1, 1, 1, 1]
+
+ out_indices = (0, 1, 2, 3)
+ backbone = MultiModalSwin(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
+ window_size=window_size,
+ ape=False, drop_path_rate=0.3, patch_norm=True,
+ out_indices=out_indices,
+ use_checkpoint=False, num_heads_fusion=mha,
+ fusion_drop=args.fusion_drop
+ )
+ if pretrained:
+ print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
+ backbone.init_weights(pretrained=pretrained)
+ else:
+ print('Randomly initialize Multi-modal Swin Transformer weights.')
+ backbone.init_weights()
+
+ model_map = [SimpleDecoding, LAVT]
+
+ classifier = model_map[0](8*embed_dim)
+ base_model = model_map[1]
+
+ model = base_model(backbone, classifier)
+ return model
+
+
+def _load_model_lavt(pretrained, args):
+ model = _segm_lavt(pretrained, args)
+ return model
+
+
+def lavt(pretrained='', args=None):
+ return _load_model_lavt(pretrained, args)
+
+
+###############################################
+# LAVT One: put BERT inside the overall model #
+###############################################
+def _segm_lavt_one(pretrained, args):
+ # initialize the SwinTransformer backbone with the specified version
+ if args.swin_type == 'tiny':
+ embed_dim = 96
+ depths = [2, 2, 6, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'small':
+ embed_dim = 96
+ depths = [2, 2, 18, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'base':
+ embed_dim = 128
+ depths = [2, 2, 18, 2]
+ num_heads = [4, 8, 16, 32]
+ elif args.swin_type == 'large':
+ embed_dim = 192
+ depths = [2, 2, 18, 2]
+ num_heads = [6, 12, 24, 48]
+ else:
+ assert False
+ # args.window12 added for test.py because state_dict is loaded after model initialization
+ if 'window12' in pretrained or args.window12:
+ print('Window size 12!')
+ window_size = 12
+ else:
+ window_size = 7
+
+ if args.mha:
+ mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
+ mha = [int(a) for a in mha]
+ else:
+ mha = [1, 1, 1, 1]
+
+ out_indices = (0, 1, 2, 3)
+ backbone = MultiModalSwin(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
+ window_size=window_size,
+ ape=False, drop_path_rate=0.3, patch_norm=True,
+ out_indices=out_indices,
+ use_checkpoint=False, num_heads_fusion=mha,
+ fusion_drop=args.fusion_drop
+ )
+ if pretrained:
+ print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
+ backbone.init_weights(pretrained=pretrained)
+ else:
+ print('Randomly initialize Multi-modal Swin Transformer weights.')
+ backbone.init_weights()
+
+ model_map = [SimpleDecoding, LAVTOne]
+
+ classifier = model_map[0](8*embed_dim)
+ base_model = model_map[1]
+
+ model = base_model(backbone, classifier, args)
+ return model
+
+
+def _load_model_lavt_one(pretrained, args):
+ model = _segm_lavt_one(pretrained, args)
+ return model
+
+
+def lavt_one(pretrained='', args=None):
+ return _load_model_lavt_one(pretrained, args)
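+
+
+# Construction sketch (illustrative; `args` is assumed to be the argparse
+# namespace used elsewhere in this repository, providing swin_type, window12,
+# mha and fusion_drop, and `path_to_swin_weights` is a placeholder):
+#
+#   model = lavt_one(pretrained=path_to_swin_weights, args=args)
+#
+# If the checkpoint filename contains 'window12' (or args.window12 is set), the
+# backbone is built with window_size=12, otherwise 7; an empty `pretrained`
+# string skips weight loading and the backbone is randomly initialized.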
diff --git a/lib/multimodal_segmentation_ppm.py b/lib/multimodal_segmentation_ppm.py
new file mode 100644
index 0000000000000000000000000000000000000000..1b496e2b19a3a88386f9dceafb5e25dbaa8a8c5d
--- /dev/null
+++ b/lib/multimodal_segmentation_ppm.py
@@ -0,0 +1,144 @@
+import torch
+import torch.nn as nn
+from .mask_predictor import SimpleDecoding
+#from .backbone import MultiModalSwinTransformer
+from .multimodal_swin_ppm import MultiModalSwin
+from ._utils import LAVT, LAVTOne
+
+__all__ = ['lavt', 'lavt_one']
+
+
+# LAVT
+def _segm_lavt(pretrained, args):
+ # initialize the SwinTransformer backbone with the specified version
+ if args.swin_type == 'tiny':
+ embed_dim = 96
+ depths = [2, 2, 6, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'small':
+ embed_dim = 96
+ depths = [2, 2, 18, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'base':
+ embed_dim = 128
+ depths = [2, 2, 18, 2]
+ num_heads = [4, 8, 16, 32]
+ elif args.swin_type == 'large':
+ embed_dim = 192
+ depths = [2, 2, 18, 2]
+ num_heads = [6, 12, 24, 48]
+ else:
+ assert False
+ # args.window12 added for test.py because state_dict is loaded after model initialization
+ if 'window12' in pretrained or args.window12:
+ print('Window size 12!')
+ window_size = 12
+ else:
+ window_size = 7
+
+ if args.mha:
+ mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
+ mha = [int(a) for a in mha]
+ else:
+ mha = [1, 1, 1, 1]
+
+ out_indices = (0, 1, 2, 3)
+ backbone = MultiModalSwin(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
+ window_size=window_size,
+ ape=False, drop_path_rate=0.3, patch_norm=True,
+ out_indices=out_indices,
+ use_checkpoint=False, num_heads_fusion=mha,
+ fusion_drop=args.fusion_drop
+ )
+ if pretrained:
+ print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
+ backbone.init_weights(pretrained=pretrained)
+ else:
+ print('Randomly initialize Multi-modal Swin Transformer weights.')
+ backbone.init_weights()
+
+ model_map = [SimpleDecoding, LAVT]
+
+ classifier = model_map[0](8*embed_dim)
+ base_model = model_map[1]
+
+ model = base_model(backbone, classifier)
+ return model
+
+
+def _load_model_lavt(pretrained, args):
+ model = _segm_lavt(pretrained, args)
+ return model
+
+
+def lavt(pretrained='', args=None):
+ return _load_model_lavt(pretrained, args)
+
+
+###############################################
+# LAVT One: put BERT inside the overall model #
+###############################################
+def _segm_lavt_one(pretrained, args):
+ # initialize the SwinTransformer backbone with the specified version
+ if args.swin_type == 'tiny':
+ embed_dim = 96
+ depths = [2, 2, 6, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'small':
+ embed_dim = 96
+ depths = [2, 2, 18, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'base':
+ embed_dim = 128
+ depths = [2, 2, 18, 2]
+ num_heads = [4, 8, 16, 32]
+ elif args.swin_type == 'large':
+ embed_dim = 192
+ depths = [2, 2, 18, 2]
+ num_heads = [6, 12, 24, 48]
+ else:
+ assert False
+ # args.window12 added for test.py because state_dict is loaded after model initialization
+ if 'window12' in pretrained or args.window12:
+ print('Window size 12!')
+ window_size = 12
+ else:
+ window_size = 7
+
+ if args.mha:
+ mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
+ mha = [int(a) for a in mha]
+ else:
+ mha = [1, 1, 1, 1]
+
+ out_indices = (0, 1, 2, 3)
+ backbone = MultiModalSwin(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
+ window_size=window_size,
+ ape=False, drop_path_rate=0.3, patch_norm=True,
+ out_indices=out_indices,
+ use_checkpoint=False, num_heads_fusion=mha,
+ fusion_drop=args.fusion_drop
+ )
+ if pretrained:
+ print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
+ backbone.init_weights(pretrained=pretrained)
+ else:
+ print('Randomly initialize Multi-modal Swin Transformer weights.')
+ backbone.init_weights()
+
+ model_map = [SimpleDecoding, LAVTOne]
+
+ classifier = model_map[0](8*embed_dim)
+ base_model = model_map[1]
+
+ model = base_model(backbone, classifier, args)
+ return model
+
+
+def _load_model_lavt_one(pretrained, args):
+ model = _segm_lavt_one(pretrained, args)
+ return model
+
+
+def lavt_one(pretrained='', args=None):
+ return _load_model_lavt_one(pretrained, args)
diff --git a/lib/multimodal_swin.py b/lib/multimodal_swin.py
new file mode 100644
index 0000000000000000000000000000000000000000..29167bbfc74001b06f3f860a50c58f21720c7b78
--- /dev/null
+++ b/lib/multimodal_swin.py
@@ -0,0 +1,275 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+import numpy as np
+
+from .backbone import MultiModalSwinTransformer
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+class MultiModalSwin(MultiModalSwinTransformer):
+ def __init__(self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.,
+ attn_drop_rate=0.,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ use_checkpoint=False,
+ num_heads_fusion=[1, 1, 1, 1],
+ fusion_drop=0.0
+ ):
+ super().__init__(pretrain_img_size=pretrain_img_size,
+ patch_size=patch_size,
+ in_chans=in_chans,
+ embed_dim=embed_dim,
+ depths=depths,
+ num_heads=num_heads,
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop_rate=drop_rate,
+ attn_drop_rate=attn_drop_rate,
+ drop_path_rate=drop_path_rate,
+ norm_layer=norm_layer,
+ ape=ape,
+ patch_norm=patch_norm,
+ out_indices=out_indices,
+ frozen_stages=frozen_stages,
+ use_checkpoint=use_checkpoint,
+ num_heads_fusion=num_heads_fusion,
+ fusion_drop=fusion_drop
+ )
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.use_checkpoint = use_checkpoint
+
+ def forward_stem(self, x):
+ x = self.patch_embed(x)
+
+ Wh, Ww = x.size(2), x.size(3)
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
+ x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
+ else:
+ x = x.flatten(2).transpose(1, 2)
+ x = self.pos_drop(x)
+ return x, Wh, Ww
+
+ def forward_stage1(self, x, H, W):
+ #print("stage1", x.shape)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
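+ # attn_mask has shape (num_windows, window_size*window_size, window_size*window_size):
+ # 0.0 where two positions belong to the same shifted-window region and -100.0
+ # otherwise, so cross-region attention is suppressed inside the SW-MSA blocks below.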
+
+ for blk in self.layers[0].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+
+ return x
+
+
+ def forward_stage2(self, x, H, W):
+ #print("stage2", x.shape)
+ #H, W = x.size(2), x.size(3)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[1].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+ return x
+
+ def forward_stage3(self, x, H, W):
+ #H, W = x.size(2), x.size(3)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[2].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+ return x
+
+ def forward_stage4(self, x, H, W):
+ #H, W = x.size(2), x.size(3)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[3].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+ return x
+
+ def forward_pwam1(self, x, H, W, l, l_mask):
+ # PWAM fusion
+ x_residual = self.layers[0].fusion(x, l, l_mask)
+ # apply a gate on the residual
+ x = x + (self.layers[0].res_gate(x_residual) * x_residual)
+
+ x_residual = self.norm0(x_residual)
+ x_residual = x_residual.view(-1, H, W, self.num_features[0]).permute(0, 3, 1, 2).contiguous()
+
+
+ if self.layers[0].downsample is not None:
+ x_down = self.layers[0].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+ def forward_pwam2(self, x, H, W, l, l_mask):
+ # PWAM fusion
+ x_residual = self.layers[1].fusion(x, l, l_mask)
+ # apply a gate on the residual
+ x = x + (self.layers[1].res_gate(x_residual) * x_residual)
+
+ x_residual = self.norm1(x_residual)
+ x_residual = x_residual.view(-1, H, W, self.num_features[1]).permute(0, 3, 1, 2).contiguous()
+ if self.layers[1].downsample is not None:
+ x_down = self.layers[1].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+ def forward_pwam3(self, x, H, W, l, l_mask):
+ # PWAM fusion
+ x_residual = self.layers[2].fusion(x, l, l_mask)
+ # apply a gate on the residual
+ x = x + (self.layers[2].res_gate(x_residual) * x_residual)
+
+ x_residual = self.norm2(x_residual)
+ x_residual = x_residual.view(-1, H, W, self.num_features[2]).permute(0, 3, 1, 2).contiguous()
+ if self.layers[2].downsample is not None:
+ x_down = self.layers[2].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+ def forward_pwam4(self, x, H, W, l, l_mask):
+ # PWAM fusion
+ x_residual = self.layers[3].fusion(x, l, l_mask)
+ # apply a gate on the residual
+ x = x + (self.layers[3].res_gate(x_residual) * x_residual)
+ x_residual = self.norm3(x_residual)
+ x_residual = x_residual.view(-1, H, W, self.num_features[3]).permute(0, 3, 1, 2).contiguous()
+ if self.layers[3].downsample is not None:
+ x_down = self.layers[3].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
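+
+
+# Forward-pass sketch for this backbone (illustrative; `img` is a batch of
+# images and `l`, `l_mask` are the language features and padding mask produced
+# by the language model elsewhere in the code base):
+#
+#   x, Wh, Ww = backbone.forward_stem(img)
+#   x = backbone.forward_stage1(x, Wh, Ww)
+#   feat1, H, W, x, Wh, Ww = backbone.forward_pwam1(x, Wh, Ww, l, l_mask)
+#   # ... then forward_stage2/forward_pwam2 through stage 4; feat1..feat4 go to the decoder.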
diff --git a/lib/multimodal_swin_ppm.py b/lib/multimodal_swin_ppm.py
new file mode 100644
index 0000000000000000000000000000000000000000..c23e1a32de36f1c4f8239eaf6adf66021538f0ea
--- /dev/null
+++ b/lib/multimodal_swin_ppm.py
@@ -0,0 +1,413 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+import numpy as np
+
+from .backbone_ppm import MultiModalSwinTransformer
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+class MultiModalSwin(MultiModalSwinTransformer):
+ def __init__(self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.,
+ attn_drop_rate=0.,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ use_checkpoint=False,
+ num_heads_fusion=[1, 1, 1, 1],
+ fusion_drop=0.0
+ ):
+ super().__init__(pretrain_img_size=pretrain_img_size,
+ patch_size=patch_size,
+ in_chans=in_chans,
+ embed_dim=embed_dim,
+ depths=depths,
+ num_heads=num_heads,
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop_rate=drop_rate,
+ attn_drop_rate=attn_drop_rate,
+ drop_path_rate=drop_path_rate,
+ norm_layer=norm_layer,
+ ape=ape,
+ patch_norm=patch_norm,
+ out_indices=out_indices,
+ frozen_stages=frozen_stages,
+ use_checkpoint=use_checkpoint,
+ num_heads_fusion=num_heads_fusion,
+ fusion_drop=fusion_drop
+ )
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.use_checkpoint = use_checkpoint
+
+ def forward_stem(self, x):
+ x = self.patch_embed(x)
+
+ Wh, Ww = x.size(2), x.size(3)
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(self.absolute_pos_embed, size=(Wh, Ww), mode='bicubic')
+ x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
+ else:
+ x = x.flatten(2).transpose(1, 2)
+ x = self.pos_drop(x)
+ return x, Wh, Ww
+
+ def forward_stage1(self, x, H, W):
+ #print("stage1", x.shape)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[0].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+
+ return x
+
+
+ def forward_stage2(self, x, H, W):
+ #print("stage2", x.shape)
+ #H, W = x.size(2), x.size(3)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[1].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+ return x
+
+ def forward_stage3(self, x, H, W):
+ #H, W = x.size(2), x.size(3)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[2].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+ return x
+
+ def forward_stage4(self, x, H, W):
+ #H, W = x.size(2), x.size(3)
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ for blk in self.layers[3].blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask) # output of a Block has shape (B, H*W, dim)
+ return x
+
+ def forward_pwam1(self, x, H, W, l, l_mask):
+ ## PWAM fusion
+ #x_residual = self.layers[0].fusion(x, l, l_mask)
+ ## apply a gate on the residual
+ #x = x + (self.layers[0].res_gate(x_residual) * x_residual)
+
+ #x_residual = self.norm0(x_residual)
+ #x_residual = x_residual.view(-1, H, W, self.num_features[0]).permute(0, 3, 1, 2).contiguous()
+
+ out = []
+
+ #torch.Size([2, 32, 1, 1])
+ #torch.Size([2, 1, 32])
+ #P3WAM
+ x_reshape = x.permute(0,2,1).view(x.shape[0], x.shape[2], H, W)
+ x_size = x_reshape.size()
+ for i, p in enumerate(self.layers[0].psizes):
+ px = self.layers[0].pyramids[i](x_reshape)
+ px = px.flatten(2).permute(0,2,1)
+ #print(px.shape)
+ px_residual = self.layers[0].fusions[i](px, l, l_mask)
+ px_residual = px_residual.permute(0,2,1).view(x.shape[0], self.layers[0].reduction_dim , p, p)
+
+ #print(px_residual.shape)
+ out.append(F.interpolate(px_residual, x_size[2:], mode='bilinear', align_corners=True).flatten(2).permute(0,2,1))
+
+ # PWAM fusion
+ #x_residual = self.fusion(x, l, l_mask)
+ ## apply a gate on the residual
+ #x = x + (self.res_gate(x_residual) * x_residual)
+
+ # PWAM fusion
+ x_residual = self.layers[0].fusion(x, l, l_mask)
+ out.append(x_residual)
+ # apply a gate on the residual
+ x = x + (self.layers[0].res_gate(x_residual) * x_residual)
+
+ #print('---')
+ #for o in out:
+ # print(o.shape)
+
+
+ x_residual = self.layers[0].mixer(torch.cat(out, dim =2))
+ x_residual = x_residual.view(-1, H, W, self.num_features[0]).permute(0, 3, 1, 2).contiguous()
+
+ if self.layers[0].downsample is not None:
+ x_down = self.layers[0].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+ def forward_pwam2(self, x, H, W, l, l_mask):
+ # PWAM fusion
+ #x_residual = self.layers[1].fusion(x, l, l_mask)
+ # apply a gate on the residual
+ #x = x + (self.layers[1].res_gate(x_residual) * x_residual)
+
+ #x_residual = self.norm1(x_residual)
+ #x_residual = x_residual.view(-1, H, W, self.num_features[1]).permute(0, 3, 1, 2).contiguous()
+ out = []
+
+ #torch.Size([2, 32, 1, 1])
+ #torch.Size([2, 1, 32])
+ #P3WAM
+ x_reshape = x.permute(0,2,1).view(x.shape[0], x.shape[2], H, W)
+ x_size = x_reshape.size()
+ for i, p in enumerate(self.layers[1].psizes):
+ px = self.layers[1].pyramids[i](x_reshape)
+ px = px.flatten(2).permute(0,2,1)
+ #print(px.shape)
+ px_residual = self.layers[1].fusions[i](px, l, l_mask)
+ px_residual = px_residual.permute(0,2,1).view(x.shape[0], self.layers[1].reduction_dim , p, p)
+ #print(px_residual.shape)
+ out.append(F.interpolate(px_residual, x_size[2:], mode='bilinear', align_corners=True).flatten(2).permute(0,2,1))
+
+ # PWAM fusion
+ #x_residual = self.fusion(x, l, l_mask)
+ ## apply a gate on the residual
+ #x = x + (self.res_gate(x_residual) * x_residual)
+
+ # PWAM fusion
+ x_residual = self.layers[1].fusion(x, l, l_mask)
+ out.append(x_residual)
+ # apply a gate on the residual
+ x = x + (self.layers[1].res_gate(x_residual) * x_residual)
+
+ #print('---')
+ #for o in out:
+ # print(o.shape)
+
+
+ x_residual = self.layers[1].mixer(torch.cat(out, dim =2))
+ x_residual = x_residual.view(-1, H, W, self.num_features[1]).permute(0, 3, 1, 2).contiguous()
+ if self.layers[1].downsample is not None:
+ x_down = self.layers[1].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+ def forward_pwam3(self, x, H, W, l, l_mask):
+ # PWAM fusion
+ #x_residual = self.layers[2].fusion(x, l, l_mask)
+ # apply a gate on the residual
+ #x = x + (self.layers[2].res_gate(x_residual) * x_residual)
+
+ #x_residual = self.norm2(x_residual)
+ #x_residual = x_residual.view(-1, H, W, self.num_features[2]).permute(0, 3, 1, 2).contiguous()
+ out = []
+
+ #torch.Size([2, 32, 1, 1])
+ #torch.Size([2, 1, 32])
+ #P3WAM
+ x_reshape = x.permute(0,2,1).view(x.shape[0], x.shape[2], H, W)
+ x_size = x_reshape.size()
+ for i, p in enumerate(self.layers[2].psizes):
+ px = self.layers[2].pyramids[i](x_reshape)
+ px = px.flatten(2).permute(0,2,1)
+ #print(px.shape)
+ px_residual = self.layers[2].fusions[i](px, l, l_mask)
+ px_residual = px_residual.permute(0,2,1).view(x.shape[0], self.layers[2].reduction_dim , p, p)
+ #print(px_residual.shape)
+ out.append(F.interpolate(px_residual, x_size[2:], mode='bilinear', align_corners=True).flatten(2).permute(0,2,1))
+
+ # PWAM fusion
+ #x_residual = self.fusion(x, l, l_mask)
+ ## apply a gate on the residual
+ #x = x + (self.res_gate(x_residual) * x_residual)
+
+ # PWAM fusion
+ x_residual = self.layers[2].fusion(x, l, l_mask)
+ out.append(x_residual)
+ # apply a gate on the residual
+ x = x + (self.layers[2].res_gate(x_residual) * x_residual)
+
+ #print('---')
+ #for o in out:
+ # print(o.shape)
+
+
+ x_residual = self.layers[2].mixer(torch.cat(out, dim =2))
+ x_residual = x_residual.view(-1, H, W, self.num_features[2]).permute(0, 3, 1, 2).contiguous()
+ if self.layers[2].downsample is not None:
+ x_down = self.layers[2].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
+
+ def forward_pwam4(self, x, H, W, l, l_mask):
+ ## PWAM fusion
+ #x_residual = self.layers[3].fusion(x, l, l_mask)
+ ## apply a gate on the residual
+ #x = x + (self.layers[3].res_gate(x_residual) * x_residual)
+ #x_residual = self.norm3(x_residual)
+ #x_residual = x_residual.view(-1, H, W, self.num_features[3]).permute(0, 3, 1, 2).contiguous()
+ out = []
+
+ #torch.Size([2, 32, 1, 1])
+ #torch.Size([2, 1, 32])
+ #P3WAM
+ x_reshape = x.permute(0,2,1).view(x.shape[0], x.shape[2], H, W)
+ x_size = x_reshape.size()
+ for i, p in enumerate(self.layers[3].psizes):
+ px = self.layers[3].pyramids[i](x_reshape)
+ px = px.flatten(2).permute(0,2,1)
+ #print(px.shape)
+ px_residual = self.layers[3].fusions[i](px, l, l_mask)
+ px_residual = px_residual.permute(0,2,1).view(x.shape[0], self.layers[3].reduction_dim , p, p)
+ #print(px_residual.shape)
+ out.append(F.interpolate(px_residual, x_size[2:], mode='bilinear', align_corners=True).flatten(2).permute(0,2,1))
+
+ # PWAM fusion
+ #x_residual = self.fusion(x, l, l_mask)
+ ## apply a gate on the residual
+ #x = x + (self.res_gate(x_residual) * x_residual)
+
+ # PWAM fusion
+ x_residual = self.layers[3].fusion(x, l, l_mask)
+ out.append(x_residual)
+ # apply a gate on the residual
+ x = x + (self.layers[3].res_gate(x_residual) * x_residual)
+
+ #print('---')
+ #for o in out:
+ # print(o.shape)
+
+
+ x_residual = self.layers[3].mixer(torch.cat(out, dim =2))
+ x_residual = x_residual.view(-1, H, W, self.num_features[3]).permute(0, 3, 1, 2).contiguous()
+ if self.layers[3].downsample is not None:
+ x_down = self.layers[3].downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x_residual, H, W, x_down, Wh, Ww
+ else:
+ return x_residual, H, W, x, H, W
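+
+
+# Difference from lib/multimodal_swin.py: each forward_pwamN here additionally
+# pools the visual features to several pyramid sizes (self.layers[N].psizes),
+# fuses every pooled map with the language features, upsamples the results back
+# to (H, W), concatenates them with the full-resolution PWAM output, and mixes
+# them with self.layers[N].mixer before the fused map is handed to the decoder.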
diff --git a/lib/segmentation.py b/lib/segmentation.py
new file mode 100644
index 0000000000000000000000000000000000000000..13a8da3080a9bec357d1d9343d16a933298b0c8d
--- /dev/null
+++ b/lib/segmentation.py
@@ -0,0 +1,143 @@
+import torch
+import torch.nn as nn
+from .mask_predictor import SimpleDecoding
+from .backbone import MultiModalSwinTransformer
+from ._utils import LAVT, LAVTOne
+
+__all__ = ['lavt', 'lavt_one']
+
+
+# LAVT
+def _segm_lavt(pretrained, args):
+ # initialize the SwinTransformer backbone with the specified version
+ if args.swin_type == 'tiny':
+ embed_dim = 96
+ depths = [2, 2, 6, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'small':
+ embed_dim = 96
+ depths = [2, 2, 18, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'base':
+ embed_dim = 128
+ depths = [2, 2, 18, 2]
+ num_heads = [4, 8, 16, 32]
+ elif args.swin_type == 'large':
+ embed_dim = 192
+ depths = [2, 2, 18, 2]
+ num_heads = [6, 12, 24, 48]
+ else:
+ assert False
+ # args.window12 added for test.py because state_dict is loaded after model initialization
+ if 'window12' in pretrained or args.window12:
+ print('Window size 12!')
+ window_size = 12
+ else:
+ window_size = 7
+
+ if args.mha:
+ mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
+ mha = [int(a) for a in mha]
+ else:
+ mha = [1, 1, 1, 1]
+
+ out_indices = (0, 1, 2, 3)
+ backbone = MultiModalSwinTransformer(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
+ window_size=window_size,
+ ape=False, drop_path_rate=0.3, patch_norm=True,
+ out_indices=out_indices,
+ use_checkpoint=False, num_heads_fusion=mha,
+ fusion_drop=args.fusion_drop
+ )
+ if pretrained:
+ print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
+ backbone.init_weights(pretrained=pretrained)
+ else:
+ print('Randomly initialize Multi-modal Swin Transformer weights.')
+ backbone.init_weights()
+
+ model_map = [SimpleDecoding, LAVT]
+
+ classifier = model_map[0](8*embed_dim)
+ base_model = model_map[1]
+
+ model = base_model(backbone, classifier)
+ return model
+
+
+def _load_model_lavt(pretrained, args):
+ model = _segm_lavt(pretrained, args)
+ return model
+
+
+def lavt(pretrained='', args=None):
+ return _load_model_lavt(pretrained, args)
+
+
+###############################################
+# LAVT One: put BERT inside the overall model #
+###############################################
+def _segm_lavt_one(pretrained, args):
+ # initialize the SwinTransformer backbone with the specified version
+ if args.swin_type == 'tiny':
+ embed_dim = 96
+ depths = [2, 2, 6, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'small':
+ embed_dim = 96
+ depths = [2, 2, 18, 2]
+ num_heads = [3, 6, 12, 24]
+ elif args.swin_type == 'base':
+ embed_dim = 128
+ depths = [2, 2, 18, 2]
+ num_heads = [4, 8, 16, 32]
+ elif args.swin_type == 'large':
+ embed_dim = 192
+ depths = [2, 2, 18, 2]
+ num_heads = [6, 12, 24, 48]
+ else:
+ assert False
+ # args.window12 added for test.py because state_dict is loaded after model initialization
+ if 'window12' in pretrained or args.window12:
+ print('Window size 12!')
+ window_size = 12
+ else:
+ window_size = 7
+
+ if args.mha:
+ mha = args.mha.split('-') # if non-empty, then ['a', 'b', 'c', 'd']
+ mha = [int(a) for a in mha]
+ else:
+ mha = [1, 1, 1, 1]
+
+ out_indices = (0, 1, 2, 3)
+ backbone = MultiModalSwinTransformer(embed_dim=embed_dim, depths=depths, num_heads=num_heads,
+ window_size=window_size,
+ ape=False, drop_path_rate=0.3, patch_norm=True,
+ out_indices=out_indices,
+ use_checkpoint=False, num_heads_fusion=mha,
+ fusion_drop=args.fusion_drop
+ )
+ if pretrained:
+ print('Initializing Multi-modal Swin Transformer weights from ' + pretrained)
+ backbone.init_weights(pretrained=pretrained)
+ else:
+ print('Randomly initialize Multi-modal Swin Transformer weights.')
+ backbone.init_weights()
+
+ model_map = [SimpleDecoding, LAVTOne]
+
+ classifier = model_map[0](8*embed_dim)
+ base_model = model_map[1]
+
+ model = base_model(backbone, classifier, args)
+ return model
+
+
+def _load_model_lavt_one(pretrained, args):
+ model = _segm_lavt_one(pretrained, args)
+ return model
+
+
+def lavt_one(pretrained='', args=None):
+ return _load_model_lavt_one(pretrained, args)
diff --git a/mask2former_utils/DataTools.py b/mask2former_utils/DataTools.py
new file mode 100644
index 0000000000000000000000000000000000000000..c922ef68b250cc712326f746368b8129453048d4
--- /dev/null
+++ b/mask2former_utils/DataTools.py
@@ -0,0 +1,28 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : DataTools.py
+@Time : 2022/09/30 07:47:36
+@Author : zzubqh
+@Version : 1.0
+@Contact : baiqh@microport.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc    :   Dataset preprocessing utilities
+'''
+
+# here put the import lib
+import os
+
+def create_annotations():
+ root_dir = r'/data/Dental'
+ img_dir = os.path.join(root_dir, 'img')
+ label_dir = os.path.join(root_dir, 'label')
+ annotation = 'tooth_label.md'
+ with open(annotation, 'w', encoding='utf-8') as wf:
+ for img_file in os.listdir(img_dir):
+ mask_file = os.path.join(label_dir, img_file.split('.')[0] + '_seg.nii.gz')
+ if os.path.exists(mask_file):
+ wf.write(f'{os.path.join(img_dir, img_file)},{mask_file}\r')
+
+if __name__ == '__main__':
+ create_annotations()
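+
+# Each line written to tooth_label.md pairs a CT image with its mask, e.g.
+# (hypothetical filenames):
+#   /data/Dental/img/case_001.nii.gz,/data/Dental/label/case_001_seg.nii.gz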
diff --git a/mask2former_utils/DatasetAnalyzer.py b/mask2former_utils/DatasetAnalyzer.py
new file mode 100644
index 0000000000000000000000000000000000000000..34e81856a5692e67510bb48ea6047d99c37dd1a7
--- /dev/null
+++ b/mask2former_utils/DatasetAnalyzer.py
@@ -0,0 +1,104 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : DatasetAnalyzer.py
+@Time : 2022/04/08 10:10:12
+@Author : zzubqh
+@Version : 1.0
+@Contact : baiqh@microport.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : None
+'''
+
+# here put the import lib
+
+import numpy as np
+import os
+import SimpleITK as sitk
+from multiprocessing import Pool
+
+
+class DatasetAnalyzer(object):
+ """
+ 接收一个类似train.md的文件
+ 格式:**/ct_file.nii.gz, */seg_file.nii.gz
+ """
+ def __init__(self, annotation_file, num_processes=4):
+ self.dataset = []
+ self.num_processes = num_processes
+ with open(annotation_file, 'r', encoding='utf-8') as rf:
+ for line_item in rf:
+ items = line_item.strip().split(',')
+ self.dataset.append({'ct': items[0], 'mask': items[1]})
+
+ print('total load {0} ct files'.format(len(self.dataset)))
+
+ def _get_effective_data(self, dataset_item: dict):
+ itk_img = sitk.ReadImage(dataset_item['ct'])
+ itk_mask = sitk.ReadImage(dataset_item['mask'])
+
+ img_np = sitk.GetArrayFromImage(itk_img)
+ mask_np = sitk.GetArrayFromImage(itk_mask)
+
+ mask_index = mask_np > 0
+ effective_data = img_np[mask_index][::10]
+ return list(effective_data)
+
+ def compute_stats(self):
+ if len(self.dataset) == 0:
+ return np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan
+
+ process_pool = Pool(self.num_processes)
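+ # map _get_effective_data over all cases; each worker returns the CT values
+ # inside that case's segmentation mask, subsampled by a factor of 10 to bound memory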
+ data_value = process_pool.map(self._get_effective_data, self.dataset)
+
+ print('sub process end, get {0} case data'.format(len(data_value)))
+ voxels = []
+ for value in data_value:
+ voxels += value
+
+ median = np.median(voxels)
+ mean = np.mean(voxels)
+ sd = np.std(voxels)
+ mn = np.min(voxels)
+ mx = np.max(voxels)
+ percentile_99_5 = np.percentile(voxels, 99.5)
+ percentile_00_5 = np.percentile(voxels, 00.5)
+
+ process_pool.close()
+ process_pool.join()
+ return median, mean, sd, mn, mx, percentile_99_5, percentile_00_5
+
+
+if __name__ == '__main__':
+ import tqdm
+ annotation = r'/home/code/Dental/Segmentation/dataset/tooth_label.md'
+ analyzer = DatasetAnalyzer(annotation, num_processes=8)
+ out_dir = r'/data/Dental/SegTrainingClipdata'
+ # t = analyzer.compute_stats()
+ # print(t)
+
+ # new_annotation = r'/home/code/BoneSegLandmark/dataset/knee_clip_label_seg.md'
+ # wf = open(new_annotation, 'w', encoding='utf-8')
+ # with open(annotation, 'r', encoding='utf-8') as rf:
+ # for str_line in rf:
+ # items = str_line.strip().split(',')
+ # ct_name = os.path.basename(items[0])
+ # new_ct_path = os.path.join(out_dir, ct_name)
+ # label_file = items[1]
+ # wf.write('{0},{1}\r'.format(new_ct_path, label_file))
+ # wf.close()
+
+ # Regenerate new CT volumes clipped to the chosen CT value range
+ for item in tqdm.tqdm(analyzer.dataset):
+ ct_file = item['ct']
+ out_name = os.path.basename(ct_file)
+ out_path = os.path.join(out_dir, out_name)
+ itk_img = sitk.ReadImage(item['ct'])
+ img_np = sitk.GetArrayFromImage(itk_img)
+ data = np.clip(img_np, 181.0, 7578.0)
+ clip_img = sitk.GetImageFromArray(data)
+ clip_img.CopyInformation(itk_img)
+ sitk.WriteImage(clip_img, out_path)
+
+
+
diff --git a/mask2former_utils/__init__.py b/mask2former_utils/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/mask2former_utils/__pycache__/__init__.cpython-37.pyc b/mask2former_utils/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..64f8243c2b95257641037e5d52305c315e1a91ae
Binary files /dev/null and b/mask2former_utils/__pycache__/__init__.cpython-37.pyc differ
diff --git a/mask2former_utils/__pycache__/criterion.cpython-37.pyc b/mask2former_utils/__pycache__/criterion.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..947023b7f417f56e08f82a103f89f832e2999442
Binary files /dev/null and b/mask2former_utils/__pycache__/criterion.cpython-37.pyc differ
diff --git a/mask2former_utils/__pycache__/matcher.cpython-37.pyc b/mask2former_utils/__pycache__/matcher.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e056e774fffb00f5b68451e82821780a2f629b03
Binary files /dev/null and b/mask2former_utils/__pycache__/matcher.cpython-37.pyc differ
diff --git a/mask2former_utils/__pycache__/misc.cpython-37.pyc b/mask2former_utils/__pycache__/misc.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..376d2a406599108f4528fe193cf5e86646500843
Binary files /dev/null and b/mask2former_utils/__pycache__/misc.cpython-37.pyc differ
diff --git a/mask2former_utils/__pycache__/point_features.cpython-37.pyc b/mask2former_utils/__pycache__/point_features.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d0c7c6767c73b8238589fb786162477ac3eba8a0
Binary files /dev/null and b/mask2former_utils/__pycache__/point_features.cpython-37.pyc differ
diff --git a/mask2former_utils/criterion.py b/mask2former_utils/criterion.py
new file mode 100644
index 0000000000000000000000000000000000000000..29f0d4347b59a31bc81aa4fbefcea07b44305e7c
--- /dev/null
+++ b/mask2former_utils/criterion.py
@@ -0,0 +1,377 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/models/detr.py
+"""
+MaskFormer criterion.
+"""
+import torch
+import numpy as np
+import torch.nn.functional as F
+import torch.distributed as dist
+from torch import nn
+import sys
+import os
+sys.path.append(os.path.dirname(__file__) + os.sep + '../')
+
+from .point_features import point_sample, get_uncertain_point_coords_with_randomness
+from .misc import is_dist_avail_and_initialized, nested_tensor_from_tensor_list, get_world_size
+
+
+def dice_loss(
+ inputs: torch.Tensor,
+ targets: torch.Tensor,
+ num_masks: float,
+ ):
+ """
+ Compute the DICE loss, similar to generalized IOU for masks
+ Args:
+ inputs: A float tensor of arbitrary shape.
+ The predictions for each example.
+ targets: A float tensor with the same shape as inputs. Stores the binary
+ classification label for each element in inputs
+ (0 for the negative class and 1 for the positive class).
+ """
+ inputs = inputs.sigmoid()
+ inputs = inputs.flatten(1)
+ numerator = 2 * (inputs * targets).sum(-1)
+ denominator = inputs.sum(-1) + targets.sum(-1)
+ loss = 1 - (numerator + 1) / (denominator + 1)
+ return loss.sum() / num_masks
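+
+# Per mask, the soft Dice term above equals
+#   1 - (2 * sum(p * t) + 1) / (sum(p) + sum(t) + 1),
+# with p = sigmoid(inputs) flattened per prediction and t the binary target;
+# the +1 smoothing keeps the loss well defined for empty masks.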
+
+
+def sigmoid_ce_loss(
+ inputs: torch.Tensor,
+ targets: torch.Tensor,
+ num_masks: float,
+ ):
+ """
+ Args:
+ inputs: A float tensor of arbitrary shape.
+ The predictions for each example.
+ targets: A float tensor with the same shape as inputs. Stores the binary
+ classification label for each element in inputs
+ (0 for the negative class and 1 for the positive class).
+ Returns:
+ Loss tensor
+ """
+ loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
+ return loss.mean(1).sum() / num_masks
+
+def sigmoid_focal_loss(inputs, targets, num_masks, alpha: float = 0.25, gamma: float = 2):
+ """
+ Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
+ Args:
+ inputs: A float tensor of arbitrary shape.
+ The predictions for each example.
+ targets: A float tensor with the same shape as inputs. Stores the binary
+ classification label for each element in inputs
+ (0 for the negative class and 1 for the positive class).
+ alpha: (optional) Weighting factor in range (0,1) to balance
+ positive vs negative examples. Default: 0.25; a negative value disables weighting.
+ gamma: Exponent of the modulating factor (1 - p_t) to
+ balance easy vs hard examples.
+ Returns:
+ Loss tensor
+ """
+ prob = inputs.sigmoid()
+ ce_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction="none")
+ p_t = prob * targets + (1 - prob) * (1 - targets)
+ loss = ce_loss * ((1 - p_t) ** gamma)
+
+ if alpha >= 0:
+ alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
+ loss = alpha_t * loss
+
+ return loss.mean(1).sum() / num_masks
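+
+# Focal term above: alpha_t * (1 - p_t)**gamma * BCE(inputs, targets), where
+# p_t is the predicted probability of the true class, so well-classified pixels
+# are down-weighted relative to hard ones before averaging over num_masks.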
+
+def calculate_uncertainty(logits):
+ """
+ We estimate uncertainty as the L1 distance between 0.0 and the logit
+ prediction in 'logits' for the foreground class.
+ Args:
+ logits (Tensor): A tensor of shape (R, 1, ...), where R is the total number
+ of predicted masks in all images. The values are logits.
+ Returns:
+ scores (Tensor): A tensor of shape (R, 1, ...) that contains uncertainty scores with
+ the most uncertain locations having the highest uncertainty score.
+ """
+ assert logits.shape[1] == 1
+ gt_class_logits = logits.clone()
+ return -(torch.abs(gt_class_logits))
+
+
+class SetCriterion(nn.Module):
+ """This class computes the loss for DETR.
+ The process happens in two steps:
+ 1) we compute hungarian assignment between ground truth boxes and the outputs of the model
+ 2) we supervise each pair of matched ground-truth / prediction (supervise class and box)
+ """
+
+ def __init__(self, num_classes, matcher, weight_dict, eos_coef, losses,
+ num_points, oversample_ratio, importance_sample_ratio, device):
+ """Create the criterion.
+ Parameters:
+ num_classes: number of object categories, omitting the special no-object category
+ matcher: module able to compute a matching between targets and proposals
+ weight_dict: dict containing as key the names of the losses and as values their relative weight.
+ eos_coef: relative classification weight applied to the no-object category
+ losses: list of all the losses to be applied. See get_loss for list of available losses.
+ """
+ super().__init__()
+ self.num_classes = num_classes
+ self.matcher = matcher
+ self.weight_dict = weight_dict
+ self.eos_coef = eos_coef
+ self.losses = losses
+ self.device = device
+ empty_weight = torch.ones(self.num_classes + 1).to(device)
+ empty_weight[-1] = self.eos_coef
+ self.register_buffer("empty_weight", empty_weight)
+
+ # pointwise mask loss parameters
+ self.num_points = num_points
+ self.oversample_ratio = oversample_ratio
+ self.importance_sample_ratio = importance_sample_ratio
+
+ def loss_labels(self, outputs, targets, indices, num_masks):
+ """Classification loss (NLL)
+ targets dicts must contain the key "labels" containing a tensor of dim [nb_target_boxes]
+ """
+ assert "pred_logits" in outputs
+ src_logits = outputs["pred_logits"].float()
+
+ idx = self._get_src_permutation_idx(indices)
+ target_classes_o = torch.cat([t["labels"][J] for t, (_, J) in zip(targets, indices)]).to(self.device)
+ target_classes = torch.full(src_logits.shape[:2], 0, dtype=torch.int64, device=src_logits.device)
+ target_classes[idx] = target_classes_o
+
+ loss_ce = F.cross_entropy(src_logits.transpose(1, 2), target_classes, self.empty_weight)
+ losses = {"loss_ce": loss_ce}
+ return losses
+
+ def loss_masks(self, outputs, targets, indices, num_masks):
+ """Compute the losses related to the masks: the focal loss and the dice loss.
+ targets dicts must contain the key "masks" containing a tensor of dim [nb_target_boxes, h, w]
+ """
+ assert "pred_masks" in outputs
+
+ src_idx = self._get_src_permutation_idx(indices)
+ tgt_idx = self._get_tgt_permutation_idx(indices)
+ src_masks = outputs["pred_masks"] #
+ src_masks = src_masks[src_idx]
+ masks = [t["masks"] for t in targets]
+ # TODO use valid to mask invalid areas due to padding in loss
+ target_masks, valid = nested_tensor_from_tensor_list(masks).decompose()
+ target_masks = target_masks.to(src_masks)
+ target_masks = target_masks[tgt_idx]
+
+ # ===================================================================================
+ # No need to upsample predictions as we are using normalized coordinates :)
+ # N x 1 x H x W
+
+ # src_masks = src_masks[:, None]
+ # target_masks = target_masks[:, None]
+
+ # with torch.no_grad():
+ # # sample point_coords
+ # point_coords = get_uncertain_point_coords_with_randomness(
+ # src_masks,
+ # lambda logits: calculate_uncertainty(logits),
+ # self.num_points,
+ # self.oversample_ratio,
+ # self.importance_sample_ratio,
+ # )
+ # # get gt labels
+ # point_labels = point_sample(
+ # target_masks,
+ # point_coords,
+ # align_corners=False,
+ # ).squeeze(1)
+
+ # point_logits = point_sample(
+ # src_masks,
+ # point_coords,
+ # align_corners=False,
+ # ).squeeze(1)
+ # ===================================================================================
+ point_logits = src_masks.flatten(1)
+ point_labels = target_masks.flatten(1)
+
+ losses = {
+ "loss_mask": sigmoid_ce_loss(point_logits, point_labels, num_masks), # sigmoid_focal_loss(point_logits, point_labels, num_masks), #
+ "loss_dice": dice_loss(point_logits, point_labels, num_masks)
+ }
+
+ del src_masks
+ del target_masks
+ return losses
+
+ def _get_src_permutation_idx(self, indices):
+ # permute predictions following indices
+ batch_idx = torch.cat([torch.full_like(src, i) for i, (src, _) in enumerate(indices)])
+ src_idx = torch.cat([src for (src, _) in indices])
+ return batch_idx, src_idx
+
+ def _get_tgt_permutation_idx(self, indices):
+ # permute targets following indices
+ batch_idx = torch.cat([torch.full_like(tgt, i) for i, (_, tgt) in enumerate(indices)])
+ tgt_idx = torch.cat([tgt for (_, tgt) in indices])
+ return batch_idx, tgt_idx
+
+ def _get_binary_mask(self, target):
+ y, x = target.size()
+ target_onehot = torch.zeros(self.num_classes + 1, y, x).to(target.device)
+ target_onehot = target_onehot.scatter(dim=0, index=target.unsqueeze(0), value=1)
+ return target_onehot
+
+ def get_loss(self, loss, outputs, targets, indices, num_masks):
+ loss_map = {
+ 'labels': self.loss_labels,
+ 'masks': self.loss_masks,
+ }
+ assert loss in loss_map, f"do you really want to compute {loss} loss?"
+ return loss_map[loss](outputs, targets, indices, num_masks)
+
+ def forward(self, outputs, gt_masks):
+ """This performs the loss computation.
+ Parameters:
+ outputs: dict of tensors, see the output specification of the model for the format
+ gt_masks: [bs, h_net_output, w_net_output]
+ """
+ outputs_without_aux = {k: v for k, v in outputs.items() if k != "aux_outputs"}
+ targets = self._get_targets(gt_masks)
+ # Retrieve the matching between the outputs of the last layer and the targets
+ indices = self.matcher(outputs_without_aux, targets)
+
+ # Compute the average number of target masks across all nodes, for normalization purposes
+ num_masks = sum(len(t["labels"]) for t in targets)
+ num_masks = torch.as_tensor([num_masks], dtype=torch.float, device=next(iter(outputs.values())).device)
+ if is_dist_avail_and_initialized():
+ torch.distributed.all_reduce(num_masks)
+ num_masks = torch.clamp(num_masks / get_world_size(), min=1).item()
+
+ # Compute all the requested losses
+ losses = {}
+ for loss in self.losses:
+ losses.update(self.get_loss(loss, outputs, targets, indices, num_masks))
+
+ # In case of auxiliary losses, we repeat this process with the output of each intermediate layer.
+ if "aux_outputs" in outputs:
+ for i, aux_outputs in enumerate(outputs["aux_outputs"]):
+ indices = self.matcher(aux_outputs, targets)
+ for loss in self.losses:
+ l_dict = self.get_loss(loss, aux_outputs, targets, indices, num_masks)
+ l_dict = {k + f"_{i}": v for k, v in l_dict.items()}
+ losses.update(l_dict)
+
+ return losses
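+ # The returned dict contains 'loss_ce', 'loss_mask' and 'loss_dice' for the
+ # final decoder layer (given losses=['labels', 'masks']), plus '_{i}'-suffixed
+ # copies for each auxiliary layer; weighting by self.weight_dict is left to the caller.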
+
+ def _get_targets(self, gt_masks):
+ targets = []
+ for mask in gt_masks:
+ binary_masks = self._get_binary_mask(mask)
+ cls_label = torch.unique(mask)
+ labels = cls_label[1:]
+ binary_masks = binary_masks[labels]
+ targets.append({'masks': binary_masks, 'labels': labels})
+ return targets
+
+ def __repr__(self):
+ head = "Criterion " + self.__class__.__name__
+ body = [
+ "matcher: {}".format(self.matcher.__repr__(_repr_indent=8)),
+ "losses: {}".format(self.losses),
+ "weight_dict: {}".format(self.weight_dict),
+ "num_classes: {}".format(self.num_classes),
+ "eos_coef: {}".format(self.eos_coef),
+ "num_points: {}".format(self.num_points),
+ "oversample_ratio: {}".format(self.oversample_ratio),
+ "importance_sample_ratio: {}".format(self.importance_sample_ratio),
+ ]
+ _repr_indent = 4
+ lines = [head] + [" " * _repr_indent + line for line in body]
+ return "\n".join(lines)
+
+
+class Criterion(object):
+ def __init__(self, num_classes, alpha=0.5, gamma=2, weight=None, ignore_index=0):
+ self.num_classes = num_classes
+ self.alpha = alpha
+ self.gamma = gamma
+ self.weight = weight
+ self.ignore_index = ignore_index
+ self.smooth = 1e-5
+ self.ce_fn = nn.CrossEntropyLoss(weight=self.weight, ignore_index=self.ignore_index, reduction='none')
+
+ def get_loss(self, outputs, gt_masks):
+ """This performs the loss computation.
+ Parameters:
+ outputs: dict of tensors, see the output specification of the model for the format
+ gt_masks: [bs, h_net_output, w_net_output]
+ """
+ loss_labels = 0.0
+ loss_masks = 0.0
+ loss_dices = 0.0
+ num = gt_masks.shape[0]
+ pred_logits = [outputs["pred_logits"].float()] # [bs, num_query, num_classes + 1]
+ pred_masks = [outputs['pred_masks'].float()] # [bs, num_query, h, w]
+ targets = self._get_targets(gt_masks, pred_logits[0].shape[1], pred_logits[0].device)
+ for aux_output in outputs['aux_outputs']:
+ pred_logits.append(aux_output["pred_logits"].float())
+ pred_masks.append(aux_output["pred_masks"].float())
+
+ gt_label = targets['labels'] # [bs, num_query]
+ gt_mask_list = targets['masks']
+ for mask_cls, pred_mask in zip(pred_logits, pred_masks):
+ loss_labels += F.cross_entropy(mask_cls.transpose(1, 2), gt_label)
+ # loss_masks += self.focal_loss(pred_result, gt_masks.to(pred_result.device))
+ loss_dices += self.dice_loss(pred_mask, gt_mask_list)
+
+ return loss_labels/num, loss_dices/num
+
+ def binary_dice_loss(self, inputs, targets):
+ inputs = inputs.sigmoid()
+ inputs = inputs.flatten(1)
+ targets = targets.flatten(1)
+ numerator = 2 * torch.einsum("nc,mc->nm", inputs, targets)
+ denominator = inputs.sum(-1)[:, None] + targets.sum(-1)[None, :]
+ loss = 1 - (numerator + 1) / (denominator + 1)
+ return loss.mean()
+
+ def dice_loss(self, predict, targets):
+ bs = predict.shape[0]
+ total_loss = 0
+ for i in range(bs):
+ pred_mask = predict[i]
+ tgt_mask = targets[i].to(predict.device)
+ dice_loss_value = self.binary_dice_loss(pred_mask, tgt_mask)
+ total_loss += dice_loss_value
+ return total_loss/bs
+
+ def focal_loss(self, preds, labels):
+ """
+ preds: [bs, num_class + 1, h, w]
+ labels: [bs, h, w]
+ """
+ logpt = -self.ce_fn(preds, labels)
+ pt = torch.exp(logpt)
+ loss = -((1 - pt) ** self.gamma) * self.alpha * logpt
+ return loss.mean()
+
+ def _get_binary_mask(self, target):
+ y, x = target.size()
+ target_onehot = torch.zeros(self.num_classes + 1, y, x)
+ target_onehot = target_onehot.scatter(dim=0, index=target.unsqueeze(0), value=1)
+ return target_onehot
+
+ def _get_targets(self, gt_masks, num_query, device):
+ binary_masks = []
+ gt_labels = []
+ for mask in gt_masks:
+ mask_onehot = self._get_binary_mask(mask)
+ cls_label = torch.unique(mask)
+ labels = torch.full((num_query,), 0, dtype=torch.int64, device=gt_masks.device)
+ labels[:len(cls_label)] = cls_label
+ binary_masks.append(mask_onehot[cls_label])
+ gt_labels.append(labels)
+ return {"labels": torch.stack(gt_labels).to(device), "masks": binary_masks}
diff --git a/mask2former_utils/matcher.py b/mask2former_utils/matcher.py
new file mode 100644
index 0000000000000000000000000000000000000000..fba4c34d911d7c35918c7cb92f7f308ef6bd12bc
--- /dev/null
+++ b/mask2former_utils/matcher.py
@@ -0,0 +1,217 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/models/matcher.py
+"""
+Modules to compute the matching cost and solve the corresponding LSAP.
+"""
+import torch
+import torch.nn.functional as F
+from scipy.optimize import linear_sum_assignment
+from torch import nn
+from torch.cuda.amp import autocast
+
+from .point_features import point_sample
+
+
+
+def batch_dice_loss(inputs: torch.Tensor, targets: torch.Tensor):
+ """
+ Compute the DICE loss, similar to generalized IOU for masks
+ Args:
+ inputs: A float tensor of arbitrary shape.
+ The predictions for each example.
+ targets: A float tensor with the same shape as inputs. Stores the binary
+ classification label for each element in inputs
+ (0 for the negative class and 1 for the positive class).
+ """
+ inputs = inputs.sigmoid()
+ inputs = inputs.flatten(1)
+ numerator = 2 * torch.einsum("nc,mc->nm", inputs, targets)
+ denominator = inputs.sum(-1)[:, None] + targets.sum(-1)[None, :]
+ loss = 1 - (numerator + 1) / (denominator + 1)
+ return loss
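+# Illustrative example: pairwise dice cost between 3 predicted masks and 2
+# ground-truth masks, each flattened to 16 pixels (shapes are assumptions).
+#   preds = torch.randn(3, 16)                   # mask logits
+#   gts = torch.randint(0, 2, (2, 16)).float()   # binary targets
+#   cost = batch_dice_loss(preds, gts)           # shape [3, 2]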
+
+
+def batch_sigmoid_ce_loss(inputs: torch.Tensor, targets: torch.Tensor):
+ """
+ Args:
+ inputs: A float tensor of arbitrary shape.
+ The predictions for each example.
+ targets: A float tensor with the same shape as inputs. Stores the binary
+ classification label for each element in inputs
+ (0 for the negative class and 1 for the positive class).
+ Returns:
+ Loss tensor
+ """
+ hw = inputs.shape[1]
+
+ pos = F.binary_cross_entropy_with_logits(
+ inputs, torch.ones_like(inputs), reduction="none"
+ )
+ neg = F.binary_cross_entropy_with_logits(
+ inputs, torch.zeros_like(inputs), reduction="none"
+ )
+
+ loss = torch.einsum("nc,mc->nm", pos, targets) + torch.einsum("nc,mc->nm", neg, (1 - targets))
+
+ return loss / hw
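+# Illustrative example: per-pair mean sigmoid cross-entropy cost, same assumed
+# shapes as the dice example above.
+#   preds = torch.randn(3, 16)
+#   gts = torch.randint(0, 2, (2, 16)).float()
+#   cost = batch_sigmoid_ce_loss(preds, gts)     # shape [3, 2]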
+
+def batch_sigmoid_focal_loss(inputs, targets, alpha: float = 0.25, gamma: float = 2):
+ """
+ Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
+ Args:
+ inputs: A float tensor of arbitrary shape.
+ The predictions for each example.
+ targets: A float tensor with the same shape as inputs. Stores the binary
+ classification label for each element in inputs
+ (0 for the negative class and 1 for the positive class).
+ alpha: (optional) Weighting factor in range (0,1) to balance
+ positive vs negative examples. Default = -1 (no weighting).
+ gamma: Exponent of the modulating factor (1 - p_t) to
+ balance easy vs hard examples.
+ Returns:
+ Loss tensor
+ """
+ hw = inputs.shape[1]
+
+ prob = inputs.sigmoid()
+ focal_pos = ((1 - prob) ** gamma) * F.binary_cross_entropy_with_logits(
+ inputs, torch.ones_like(inputs), reduction="none"
+ )
+ focal_neg = (prob ** gamma) * F.binary_cross_entropy_with_logits(
+ inputs, torch.zeros_like(inputs), reduction="none"
+ )
+ if alpha >= 0:
+ focal_pos = focal_pos * alpha
+ focal_neg = focal_neg * (1 - alpha)
+
+ loss = torch.einsum("nc,mc->nm", focal_pos, targets) + torch.einsum("nc,mc->nm", focal_neg, (1 - targets))
+
+ return loss / hw
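+# Illustrative example: per-pair focal-loss cost with the default alpha/gamma
+# (shapes are assumptions for illustration only).
+#   preds = torch.randn(3, 16)
+#   gts = torch.randint(0, 2, (2, 16)).float()
+#   cost = batch_sigmoid_focal_loss(preds, gts)  # shape [3, 2]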
+
+
+class HungarianMatcher(nn.Module):
+ """This class computes an assignment between the targets and the predictions of the network
+ For efficiency reasons, the targets don't include the no_object. Because of this, in general,
+ there are more predictions than targets. In this case, we do a 1-to-1 matching of the best predictions,
+ while the others are un-matched (and thus treated as non-objects).
+ """
+
+ def __init__(self, cost_class: float = 1, cost_mask: float = 1, cost_dice: float = 1, num_points: int = 0):
+ """Creates the matcher
+
+ Params:
+ cost_class: This is the relative weight of the classification error in the matching cost
+ cost_mask: This is the relative weight of the focal loss of the binary mask in the matching cost
+ cost_dice: This is the relative weight of the dice loss of the binary mask in the matching cost
+ """
+ super().__init__()
+ self.cost_class = cost_class
+ self.cost_mask = cost_mask
+ self.cost_dice = cost_dice
+
+ assert cost_class != 0 or cost_mask != 0 or cost_dice != 0, "all costs cant be 0"
+
+ self.num_points = num_points
+
+ @torch.no_grad()
+ def memory_efficient_forward(self, outputs, targets):
+ """More memory-friendly matching"""
+ bs, num_queries = outputs["pred_logits"].shape[:2]
+
+ indices = []
+
+ # Iterate through batch size
+ for b in range(bs):
+ out_prob = outputs["pred_logits"][b].softmax(-1) # [num_queries, num_classes+1]
+ out_mask = outputs["pred_masks"][b] # [num_queries, H_pred, W_pred]
+
+ tgt_ids = targets[b]["labels"] # e.g. [1, 2, 3, ...]
+ tgt_mask = targets[b]["masks"].to(out_mask) # [c, h, w] c = len(tgt_ids)
+
+ # Compute the classification cost. Contrary to the loss, we don't use the NLL,
+ # but approximate it in 1 - proba[target class].
+ # The 1 is a constant that doesn't change the matching, so it can be omitted.
+ cost_class = -out_prob[:, tgt_ids] # [num_queries, num_total_targets]
+
+ #=========================== Mask2Former-style point sampling ===========================#
+ # out_mask = out_mask[:, None] # [num_queries, 1, H_pred, W_pred]
+ # tgt_mask = tgt_mask[:, None] # [c, 1, h, w]
+
+ # # all masks share the same set of points for efficient matching!
+ # point_coords = torch.rand(1, self.num_points, 2, device=out_mask.device)
+ # # get gt labels
+ # tgt_mask = point_sample(
+ # tgt_mask, # [c, 1, h, w]
+ # point_coords.repeat(tgt_mask.shape[0], 1, 1), # [c, self.num_points, 2]
+ # align_corners=False,
+ # ).squeeze(1) # [c, self.num_points]
+
+ # out_mask = point_sample(
+ # out_mask,
+ # point_coords.repeat(out_mask.shape[0], 1, 1),
+ # align_corners=False,
+ # ).squeeze(1) # [num_queries, self.num_points]
+ #===========================end====================================#
+
+ #=========================== MaskFormer-style (dense) matching ===========================#
+ # Flatten spatial dimension
+ out_mask = out_mask.flatten(1) # [num_queries, H*W]
+ tgt_mask = tgt_mask.flatten(1) # [num_total_targets, H*W]
+
+ with autocast(enabled=False):
+ out_mask = out_mask.float()
+ tgt_mask = tgt_mask.float()
+ # Compute the focal loss between masks
+ cost_mask = batch_sigmoid_focal_loss(out_mask, tgt_mask)
+
+ # Compute the dice loss between masks
+ cost_dice = batch_dice_loss(out_mask, tgt_mask)
+
+ # Final cost matrix
+ C = (
+ self.cost_mask * cost_mask
+ + self.cost_class * cost_class
+ + self.cost_dice * cost_dice
+ )
+ C = C.reshape(num_queries, -1).cpu() # [num_queries, num_total_targets]
+
+ indices.append(linear_sum_assignment(C))
+
+ return [
+ (torch.as_tensor(i, dtype=torch.int64), torch.as_tensor(j, dtype=torch.int64))
+ for i, j in indices
+ ]
+
+ @torch.no_grad()
+ def forward(self, outputs, targets):
+ """Performs the matching
+
+ Params:
+ outputs: This is a dict that contains at least these entries:
+ "pred_logits": Tensor of dim [batch_size, num_queries, num_classes] with the classification logits
+ "pred_masks": Tensor of dim [batch_size, num_queries, H_pred, W_pred] with the predicted masks
+
+ targets: This is a list of targets (len(targets) = batch_size), where each target is a dict containing:
+ "labels": Tensor of dim [num_target_boxes] (where num_target_boxes is the number of ground-truth
+ objects in the target) containing the class labels
+ "masks": Tensor of dim [num_target_boxes, H_gt, W_gt] containing the target masks
+
+ Returns:
+ A list of size batch_size, containing tuples of (index_i, index_j) where:
+ - index_i is the indices of the selected predictions (in order)
+ - index_j is the indices of the corresponding selected targets (in order)
+ For each batch element, it holds:
+ len(index_i) = len(index_j) = min(num_queries, num_target_boxes)
+ """
+ return self.memory_efficient_forward(outputs, targets)
+
+ def __repr__(self, _repr_indent=4):
+ head = "Matcher " + self.__class__.__name__
+ body = [
+ "cost_class: {}".format(self.cost_class),
+ "cost_mask: {}".format(self.cost_mask),
+ "cost_dice: {}".format(self.cost_dice),
+ ]
+ lines = [head] + [" " * _repr_indent + line for line in body]
+ return "\n".join(lines)
diff --git a/mask2former_utils/misc.py b/mask2former_utils/misc.py
new file mode 100644
index 0000000000000000000000000000000000000000..aecec3e1edbc509f1bd603890ce1d87e488af12a
--- /dev/null
+++ b/mask2former_utils/misc.py
@@ -0,0 +1,252 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py
+"""
+Misc functions, including distributed helpers.
+
+Mostly copy-paste from torchvision references.
+"""
+from typing import List, Optional
+from collections import OrderedDict
+from scipy.io import loadmat
+import numpy as np
+import csv
+from PIL import Image
+import matplotlib.pyplot as plt
+import torch
+import torch.distributed as dist
+import torchvision
+from torch import Tensor
+
+
+def _max_by_axis(the_list):
+ # type: (List[List[int]]) -> List[int]
+ maxes = the_list[0]
+ for sublist in the_list[1:]:
+ for index, item in enumerate(sublist):
+ maxes[index] = max(maxes[index], item)
+ return maxes
+
+def get_world_size() -> int:
+ if not dist.is_available():
+ return 1
+ if not dist.is_initialized():
+ return 1
+ return dist.get_world_size()
+
+def reduce_dict(input_dict, average=True):
+ """
+ Args:
+ input_dict (dict): all the values will be reduced
+ average (bool): whether to do average or sum
+ Reduce the values in the dictionary from all processes so that all processes
+ have the averaged results. Returns a dict with the same fields as
+ input_dict, after reduction.
+ """
+ world_size = get_world_size()
+ if world_size < 2:
+ return input_dict
+ with torch.no_grad():
+ names = []
+ values = []
+ # sort the keys so that they are consistent across processes
+ for k in sorted(input_dict.keys()):
+ names.append(k)
+ values.append(input_dict[k])
+ values = torch.stack(values, dim=0)
+ dist.all_reduce(values)
+ if average:
+ values /= world_size
+ reduced_dict = {k: v for k, v in zip(names, values)}
+ return reduced_dict
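+# Illustrative usage (assumes torch.distributed has been initialized; with a
+# single process the dict is returned unchanged):
+#   losses = {'loss_mask': torch.tensor(0.7), 'loss_dice': torch.tensor(0.3)}
+#   reduced = reduce_dict(losses)   # every rank receives the averaged values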
+
+class NestedTensor(object):
+ def __init__(self, tensors, mask: Optional[Tensor]):
+ self.tensors = tensors
+ self.mask = mask
+
+ def to(self, device):
+ # type: (Device) -> NestedTensor # noqa
+ cast_tensor = self.tensors.to(device)
+ mask = self.mask
+ if mask is not None:
+ assert mask is not None
+ cast_mask = mask.to(device)
+ else:
+ cast_mask = None
+ return NestedTensor(cast_tensor, cast_mask)
+
+ def decompose(self):
+ return self.tensors, self.mask
+
+ def __repr__(self):
+ return str(self.tensors)
+
+def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
+ # TODO make this more general
+ if tensor_list[0].ndim == 3:
+ if torchvision._is_tracing():
+ # nested_tensor_from_tensor_list() does not export well to ONNX
+ # call _onnx_nested_tensor_from_tensor_list() instead
+ return _onnx_nested_tensor_from_tensor_list(tensor_list)
+
+ # TODO make it support different-sized images
+ max_size = _max_by_axis([list(img.shape) for img in tensor_list])
+ # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
+ batch_shape = [len(tensor_list)] + max_size
+ b, c, h, w = batch_shape
+ dtype = tensor_list[0].dtype
+ device = tensor_list[0].device
+ tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
+ mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
+ for img, pad_img, m in zip(tensor_list, tensor, mask):
+ pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
+ m[: img.shape[1], : img.shape[2]] = False
+ else:
+ raise ValueError("not supported")
+ return NestedTensor(tensor, mask)
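+# Illustrative example: batch two differently sized 3-channel images; padded
+# positions are True in the returned mask.
+#   imgs = [torch.randn(3, 200, 300), torch.randn(3, 240, 280)]
+#   nested = nested_tensor_from_tensor_list(imgs)
+#   tensors, mask = nested.decompose()   # tensors: [2, 3, 240, 300], mask: [2, 240, 300]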
+
+# _onnx_nested_tensor_from_tensor_list() is an implementation of
+# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
+@torch.jit.unused
+def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
+ max_size = []
+ for i in range(tensor_list[0].dim()):
+ max_size_i = torch.max(
+ torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
+ ).to(torch.int64)
+ max_size.append(max_size_i)
+ max_size = tuple(max_size)
+
+ # work around for
+ # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
+ # m[: img.shape[1], :img.shape[2]] = False
+ # which is not yet supported in onnx
+ padded_imgs = []
+ padded_masks = []
+ for img in tensor_list:
+ padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
+ padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
+ padded_imgs.append(padded_img)
+
+ m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
+ padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
+ padded_masks.append(padded_mask.to(torch.bool))
+
+ tensor = torch.stack(padded_imgs)
+ mask = torch.stack(padded_masks)
+
+ return NestedTensor(tensor, mask=mask)
+
+def is_dist_avail_and_initialized():
+ if not dist.is_available():
+ return False
+ if not dist.is_initialized():
+ return False
+ return True
+
+def load_parallal_model(model, state_dict_):
+ state_dict = OrderedDict()
+ for key in state_dict_:
+ if key.startswith('module') and not key.startswith('module_list'):
+ state_dict[key[7:]] = state_dict_[key]
+ else:
+ state_dict[key] = state_dict_[key]
+
+ # check loaded parameters and created model parameters
+ model_state_dict = model.state_dict()
+ for key in state_dict:
+ if key in model_state_dict:
+ if state_dict[key].shape != model_state_dict[key].shape:
+ print('Skip loading parameter {}, required shape {}, loaded shape {}.'.format(
+ key, model_state_dict[key].shape, state_dict[key].shape))
+ state_dict[key] = model_state_dict[key]
+ else:
+ print('Drop parameter {}.'.format(key))
+ for key in model_state_dict:
+ if key not in state_dict:
+ print('No param {}.'.format(key))
+ state_dict[key] = model_state_dict[key]
+ model.load_state_dict(state_dict, strict=False)
+
+ return model
+
+class ADEVisualize(object):
+ def __init__(self):
+ self.colors = loadmat('dataset/color150.mat')['colors']
+ self.names = {}
+ with open('dataset/object150_info.csv') as f:
+ reader = csv.reader(f)
+ next(reader)
+ for row in reader:
+ self.names[int(row[0])] = row[5].split(";")[0]
+
+ def unique(self, ar, return_index=False, return_inverse=False, return_counts=False):
+ ar = np.asanyarray(ar).flatten()
+
+ optional_indices = return_index or return_inverse
+ optional_returns = optional_indices or return_counts
+
+ if ar.size == 0:
+ if not optional_returns:
+ ret = ar
+ else:
+ ret = (ar,)
+ if return_index:
+ ret += (np.empty(0, bool),)
+ if return_inverse:
+ ret += (np.empty(0, bool),)
+ if return_counts:
+ ret += (np.empty(0, np.intp),)
+ return ret
+ if optional_indices:
+ perm = ar.argsort(kind='mergesort' if return_index else 'quicksort')
+ aux = ar[perm]
+ else:
+ ar.sort()
+ aux = ar
+ flag = np.concatenate(([True], aux[1:] != aux[:-1]))
+
+ if not optional_returns:
+ ret = aux[flag]
+ else:
+ ret = (aux[flag],)
+ if return_index:
+ ret += (perm[flag],)
+ if return_inverse:
+ iflag = np.cumsum(flag) - 1
+ inv_idx = np.empty(ar.shape, dtype=np.intp)
+ inv_idx[perm] = iflag
+ ret += (inv_idx,)
+ if return_counts:
+ idx = np.concatenate(np.nonzero(flag) + ([ar.size],))
+ ret += (np.diff(idx),)
+ return ret
+
+ def colorEncode(self, labelmap, colors, mode='RGB'):
+ labelmap = labelmap.astype('int')
+ labelmap_rgb = np.zeros((labelmap.shape[0], labelmap.shape[1], 3),
+ dtype=np.uint8)
+ for label in self.unique(labelmap):
+ if label < 0:
+ continue
+ labelmap_rgb += (labelmap == label)[:, :, np.newaxis] * \
+ np.tile(colors[label],
+ (labelmap.shape[0], labelmap.shape[1], 1))
+
+ if mode == 'BGR':
+ return labelmap_rgb[:, :, ::-1]
+ else:
+ return labelmap_rgb
+
+ def show_result(self, img, pred, save_path=None):
+ pred = np.int32(pred)
+ # colorize prediction
+ pred_color = self.colorEncode(pred, self.colors)
+ pil_img = img.convert('RGBA')
+ pred_color = Image.fromarray(pred_color).convert('RGBA')
+ im_vis = Image.blend(pil_img, pred_color, 0.6)
+ if save_path is not None:
+ im_vis.save(save_path)
+ # Image.fromarray(im_vis).save(save_path)
+ else:
+ plt.imshow(im_vis)
\ No newline at end of file
diff --git a/mask2former_utils/point_features.py b/mask2former_utils/point_features.py
new file mode 100644
index 0000000000000000000000000000000000000000..55a1538b0f60c4b35bf09b404667a4ee25a988eb
--- /dev/null
+++ b/mask2former_utils/point_features.py
@@ -0,0 +1,109 @@
+import torch
+from torch.nn import functional as F
+
+def point_sample(input, point_coords, **kwargs):
+ """
+ A wrapper around :function:`torch.nn.functional.grid_sample` to support 3D point_coords tensors.
+ Unlike :function:`torch.nn.functional.grid_sample` it assumes `point_coords` to lie inside
+ [0, 1] x [0, 1] square.
+
+ Args:
+ input (Tensor): A tensor of shape (N, C, H, W) that contains features map on a H x W grid.
+ point_coords (Tensor): A tensor of shape (N, P, 2) or (N, Hgrid, Wgrid, 2) that contains
+ [0, 1] x [0, 1] normalized point coordinates.
+
+ Returns:
+ output (Tensor): A tensor of shape (N, C, P) or (N, C, Hgrid, Wgrid) that contains
+ features for points in `point_coords`. The features are obtained via bilinear
+ interpolation from `input` the same way as :function:`torch.nn.functional.grid_sample`.
+ """
+ add_dim = False
+ if point_coords.dim() == 3:
+ add_dim = True
+ point_coords = point_coords.unsqueeze(2) # [c, self.num_points, 1, 2]
+ output = F.grid_sample(input, 2.0 * point_coords - 1.0, **kwargs) # [c, 1, self.num_points, 1]
+ if add_dim:
+ output = output.squeeze(3)
+ return output # [c, 1, self.num_points]
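+# Illustrative example: sample 5 normalized points from a feature map
+# (shapes are assumptions for illustration).
+#   feats = torch.randn(2, 1, 32, 32)                            # (N, C, H, W)
+#   coords = torch.rand(2, 5, 2)                                 # (N, P, 2) in [0, 1] x [0, 1]
+#   sampled = point_sample(feats, coords, align_corners=False)   # (N, C, P) -> [2, 1, 5]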
+
+def get_uncertain_point_coords_with_randomness(
+ coarse_logits, uncertainty_func, num_points, oversample_ratio, importance_sample_ratio
+):
+ """
+ Sample points in [0, 1] x [0, 1] coordinate space based on their uncertainty. The uncertainties
+ are calculated for each point using 'uncertainty_func' function that takes point's logit
+ prediction as input.
+ See PointRend paper for details.
+
+ Args:
+ coarse_logits (Tensor): A tensor of shape (N, C, Hmask, Wmask) or (N, 1, Hmask, Wmask) for
+ class-specific or class-agnostic prediction.
+ uncertainty_func: A function that takes a Tensor of shape (N, C, P) or (N, 1, P) that
+ contains logit predictions for P points and returns their uncertainties as a Tensor of
+ shape (N, 1, P).
+ num_points (int): The number of points P to sample.
+ oversample_ratio (int): Oversampling parameter.
+ importance_sample_ratio (float): Ratio of points that are sampled via importance sampling.
+
+ Returns:
+ point_coords (Tensor): A tensor of shape (N, P, 2) that contains the coordinates of P
+ sampled points.
+ """
+ assert oversample_ratio >= 1
+ assert importance_sample_ratio <= 1 and importance_sample_ratio >= 0
+ num_boxes = coarse_logits.shape[0]
+ num_sampled = int(num_points * oversample_ratio)
+ point_coords = torch.rand(num_boxes, num_sampled, 2, device=coarse_logits.device)
+ point_logits = point_sample(coarse_logits, point_coords, align_corners=False)
+ # It is crucial to calculate uncertainty based on the sampled prediction value for the points.
+ # Calculating uncertainties of the coarse predictions first and sampling them for points leads
+ # to incorrect results.
+ # To illustrate this: assume uncertainty_func(logits)=-abs(logits), a sampled point between
+ # two coarse predictions with -1 and 1 logits has 0 logits, and therefore 0 uncertainty value.
+ # However, if we calculate uncertainties for the coarse predictions first,
+ # both will have -1 uncertainty, and the sampled point will get -1 uncertainty.
+ point_uncertainties = uncertainty_func(point_logits)
+ num_uncertain_points = int(importance_sample_ratio * num_points)
+ num_random_points = num_points - num_uncertain_points
+ idx = torch.topk(point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1]
+ shift = num_sampled * torch.arange(num_boxes, dtype=torch.long, device=coarse_logits.device)
+ idx += shift[:, None]
+ point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view(
+ num_boxes, num_uncertain_points, 2
+ )
+ if num_random_points > 0:
+ point_coords = torch.cat(
+ [
+ point_coords,
+ torch.rand(num_boxes, num_random_points, 2, device=coarse_logits.device),
+ ],
+ dim=1,
+ )
+ return point_coords
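+# Illustrative example, using negative absolute logit magnitude as the
+# uncertainty function (as in PointRend); shapes are assumptions.
+#   logits = torch.randn(4, 1, 16, 16)               # (N, 1, Hmask, Wmask)
+#   uncertainty = lambda l: -torch.abs(l)            # higher near the decision boundary
+#   coords = get_uncertain_point_coords_with_randomness(
+#       logits, uncertainty, num_points=64, oversample_ratio=3, importance_sample_ratio=0.75)
+#   # coords: (4, 64, 2) normalized point coordinates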
+
+
+def get_uncertain_point_coords_on_grid(uncertainty_map, num_points):
+ """
+ Find `num_points` most uncertain points from `uncertainty_map` grid.
+
+ Args:
+ uncertainty_map (Tensor): A tensor of shape (N, 1, H, W) that contains uncertainty
+ values for a set of points on a regular H x W grid.
+ num_points (int): The number of points P to select.
+
+ Returns:
+ point_indices (Tensor): A tensor of shape (N, P) that contains indices from
+ [0, H x W) of the most uncertain points.
+ point_coords (Tensor): A tensor of shape (N, P, 2) that contains [0, 1] x [0, 1] normalized
+ coordinates of the most uncertain points from the H x W grid.
+ """
+ R, _, H, W = uncertainty_map.shape
+ h_step = 1.0 / float(H)
+ w_step = 1.0 / float(W)
+
+ num_points = min(H * W, num_points)
+ point_indices = torch.topk(uncertainty_map.view(R, H * W), k=num_points, dim=1)[1]
+ point_coords = torch.zeros(R, num_points, 2, dtype=torch.float, device=uncertainty_map.device)
+ point_coords[:, :, 0] = w_step / 2.0 + (point_indices % W).to(torch.float) * w_step
+ point_coords[:, :, 1] = h_step / 2.0 + (point_indices // W).to(torch.float) * h_step
+ return point_indices, point_coords
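+# Illustrative example: pick the 10 most uncertain grid positions.
+#   umap = torch.rand(2, 1, 8, 8)                    # (N, 1, H, W)
+#   idx, coords = get_uncertain_point_coords_on_grid(umap, num_points=10)
+#   # idx: (2, 10) flat indices in [0, H*W); coords: (2, 10, 2) normalized coordinates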
diff --git a/mask2former_utils/solver.py b/mask2former_utils/solver.py
new file mode 100644
index 0000000000000000000000000000000000000000..a69aca83c99aa56f36b6b4471240842eed78cfc7
--- /dev/null
+++ b/mask2former_utils/solver.py
@@ -0,0 +1,108 @@
+import copy
+import itertools
+import logging
+from collections import defaultdict
+from enum import Enum
+from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Type, Union
+import torch
+from fvcore.common.param_scheduler import CosineParamScheduler, MultiStepParamScheduler
+
+from detectron2.config import CfgNode
+
+_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]]
+_GradientClipper = Callable[[_GradientClipperInput], None]
+
+
+class GradientClipType(Enum):
+ VALUE = "value"
+ NORM = "norm"
+
+
+def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper:
+ """
+ Creates gradient clipping closure to clip by value or by norm,
+ according to the provided config.
+ """
+ cfg = copy.deepcopy(cfg)
+
+ def clip_grad_norm(p: _GradientClipperInput):
+ torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE)
+
+ def clip_grad_value(p: _GradientClipperInput):
+ torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE)
+
+ _GRADIENT_CLIP_TYPE_TO_CLIPPER = {
+ GradientClipType.VALUE: clip_grad_value,
+ GradientClipType.NORM: clip_grad_norm,
+ }
+ return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)]
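+# Illustrative sketch (a minimal CfgNode with only the fields read above is an
+# assumption, not a config shipped with this repository; 'model' is any nn.Module):
+#   clip_cfg = CfgNode({'CLIP_TYPE': 'norm', 'CLIP_VALUE': 1.0, 'NORM_TYPE': 2.0})
+#   clipper = _create_gradient_clipper(clip_cfg)
+#   clipper(model.parameters())   # clip gradients in place after loss.backward()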
+
+
+def _generate_optimizer_class_with_gradient_clipping(
+ optimizer: Type[torch.optim.Optimizer],
+ *,
+ per_param_clipper: Optional[_GradientClipper] = None,
+ global_clipper: Optional[_GradientClipper] = None,
+) -> Type[torch.optim.Optimizer]:
+ """
+ Dynamically creates a new type that inherits the type of a given instance
+ and overrides the `step` method to add gradient clipping
+ """
+ assert (
+ per_param_clipper is None or global_clipper is None
+ ), "Not allowed to use both per-parameter clipping and global clipping"
+
+ def optimizer_wgc_step(self, closure=None):
+ if per_param_clipper is not None:
+ for group in self.param_groups:
+ for p in group["params"]:
+ per_param_clipper(p)
+ else:
+ # global clipper for future use with detr
+ # (https://github.com/facebookresearch/detr/pull/287)
+ all_params = itertools.chain(*[g["params"] for g in self.param_groups])
+ global_clipper(all_params)
+ super(type(self), self).step(closure)
+
+ OptimizerWithGradientClip = type(
+ optimizer.__name__ + "WithGradientClip",
+ (optimizer,),
+ {"step": optimizer_wgc_step},
+ )
+ return OptimizerWithGradientClip
+
+
+def maybe_add_gradient_clipping(
+ cfg: CfgNode, optimizer: Type[torch.optim.Optimizer]
+) -> Type[torch.optim.Optimizer]:
+ """
+ If gradient clipping is enabled through config options, wraps the existing
+ optimizer type to become a new dynamically created class OptimizerWithGradientClip
+ that inherits the given optimizer and overrides the `step` method to
+ include gradient clipping.
+
+ Args:
+ cfg: CfgNode, configuration options
+ optimizer: type. A subclass of torch.optim.Optimizer
+
+ Return:
+ type: either the input `optimizer` (if gradient clipping is disabled), or
+ a subclass of it with gradient clipping included in the `step` method.
+ """
+ if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED:
+ return optimizer
+ if isinstance(optimizer, torch.optim.Optimizer):
+ optimizer_type = type(optimizer)
+ else:
+ assert issubclass(optimizer, torch.optim.Optimizer), optimizer
+ optimizer_type = optimizer
+
+ grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS)
+ OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping(
+ optimizer_type, per_param_clipper=grad_clipper
+ )
+ if isinstance(optimizer, torch.optim.Optimizer):
+ optimizer.__class__ = OptimizerWithGradientClip # a bit hacky, not recommended
+ return optimizer
+ else:
+ return OptimizerWithGradientClip
\ No newline at end of file
diff --git a/mask2former_utils/summary.py b/mask2former_utils/summary.py
new file mode 100644
index 0000000000000000000000000000000000000000..7c16c7b26ce71627acb3cb4f9886cfb424540672
--- /dev/null
+++ b/mask2former_utils/summary.py
@@ -0,0 +1,125 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : summary.py
+@Time : 2022/10/15 23:38:13
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : Runtime logging utilities
+'''
+
+# here put the import lib
+
+import os
+import sys
+import torch
+import logging
+from datetime import datetime
+
+# return a fallback SummaryWriter if tensorboardX is not installed
+
+try:
+ from tensorboardX import SummaryWriter
+except ImportError:
+ class SummaryWriter:
+ def __init__(self, log_dir=None, comment='', **kwargs):
+ print('\nunable to import tensorboardX; logs will be saved with torch.save instead!\n')
+ self.log_dir = log_dir if log_dir is not None else './logs'
+ os.makedirs(self.log_dir, exist_ok=True)
+ self.logs = {'comment': comment}
+ return
+
+ def add_scalar(self, tag, scalar_value, global_step=None, walltime=None):
+ if tag in self.logs:
+ self.logs[tag].append((scalar_value, global_step, walltime))
+ else:
+ self.logs[tag] = [(scalar_value, global_step, walltime)]
+ return
+
+ def close(self):
+ timestamp = str(datetime.now()).replace(' ', '_').replace(':', '_')
+ torch.save(self.logs, os.path.join(self.log_dir, 'log_%s.pickle' % timestamp))
+ return
+
+
+class EmptySummaryWriter:
+ def __init__(self, **kwargs):
+ pass
+
+ def add_scalar(self, tag, scalar_value, global_step=None, walltime=None):
+ pass
+
+ def close(self):
+ pass
+
+
+def create_summary(distributed_rank=0, **kwargs):
+ if distributed_rank > 0:
+ return EmptySummaryWriter(**kwargs)
+ else:
+ return SummaryWriter(**kwargs)
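+# Illustrative usage: rank 0 gets a real writer, other ranks a no-op writer.
+#   writer = create_summary(distributed_rank=0, log_dir='./logs')
+#   writer.add_scalar('train/loss', 0.42, global_step=100)
+#   writer.close()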
+
+
+def create_logger(distributed_rank=0, save_dir=None):
+ logger = logging.getLogger('logger')
+ logger.setLevel(logging.DEBUG)
+
+ filename = "log_%s.txt" % (datetime.now().strftime("%Y_%m_%d_%H_%M_%S"))
+
+ # don't log results for the non-master process
+ if distributed_rank > 0:
+ return logger
+ ch = logging.StreamHandler(stream=sys.stdout)
+ ch.setLevel(logging.DEBUG)
+ # formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
+ formatter = logging.Formatter("%(message)s [%(asctime)s]")
+ ch.setFormatter(formatter)
+ logger.addHandler(ch)
+
+ if save_dir is not None:
+ fh = logging.FileHandler(os.path.join(save_dir, filename))
+ fh.setLevel(logging.DEBUG)
+ fh.setFormatter(formatter)
+ logger.addHandler(fh)
+
+ return logger
+
+
+class Saver:
+ def __init__(self, distributed_rank, save_dir):
+ self.distributed_rank = distributed_rank
+ self.save_dir = save_dir
+ os.makedirs(self.save_dir, exist_ok=True)
+ return
+
+ def save(self, obj, save_name):
+ if self.distributed_rank == 0:
+ torch.save(obj, os.path.join(self.save_dir, save_name + '.t7'))
+ return 'checkpoint saved in %s !' % os.path.join(self.save_dir, save_name)
+ else:
+ return ''
+
+
+def create_saver(distributed_rank, save_dir):
+ return Saver(distributed_rank, save_dir)
+
+
+class DisablePrint:
+ def __init__(self, local_rank=0):
+ self.local_rank = local_rank
+
+ def __enter__(self):
+ if self.local_rank != 0:
+ self._original_stdout = sys.stdout
+ sys.stdout = open(os.devnull, 'w')
+ else:
+ pass
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ if self.local_rank != 0:
+ sys.stdout.close()
+ sys.stdout = self._original_stdout
+ else:
+ pass
\ No newline at end of file
diff --git a/modeling/MaskFormerModel.py b/modeling/MaskFormerModel.py
new file mode 100644
index 0000000000000000000000000000000000000000..b6396b37866c4ac245cc206b993aeb57b1b20871
--- /dev/null
+++ b/modeling/MaskFormerModel.py
@@ -0,0 +1,103 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : MaskFormerModel.py
+@Time : 2022/09/30 20:50:53
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : Segmentation network based on deformable transformer attention
+'''
+
+# here put the import lib
+from torch import nn
+from addict import Dict
+
+from .backbone.resnet import ResNet, resnet_spec
+from .pixel_decoder.msdeformattn import MSDeformAttnPixelDecoder
+from .transformer_decoder.mask2former_transformer_decoder import MultiScaleMaskedTransformerDecoder
+
+
+class MaskFormerHead(nn.Module):
+ def __init__(self, cfg, input_shape):
+ super().__init__()
+ self.pixel_decoder = self.pixel_decoder_init(cfg, input_shape)
+ self.predictor = self.predictor_init(cfg)
+
+ def pixel_decoder_init(self, cfg, input_shape):
+ common_stride = cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE
+ transformer_dropout = cfg.MODEL.MASK_FORMER.DROPOUT
+ transformer_nheads = cfg.MODEL.MASK_FORMER.NHEADS
+ transformer_dim_feedforward = 1024
+ transformer_enc_layers = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS
+ conv_dim = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
+ mask_dim = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
+ transformer_in_features = cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES # ["res3", "res4", "res5"]
+
+ pixel_decoder = MSDeformAttnPixelDecoder(input_shape,
+ transformer_dropout,
+ transformer_nheads,
+ transformer_dim_feedforward,
+ transformer_enc_layers,
+ conv_dim,
+ mask_dim,
+ transformer_in_features,
+ common_stride)
+ return pixel_decoder
+
+ def predictor_init(self, cfg):
+ in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
+ num_classes = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES
+ hidden_dim = cfg.MODEL.MASK_FORMER.HIDDEN_DIM
+ num_queries = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES
+ nheads = cfg.MODEL.MASK_FORMER.NHEADS
+ dim_feedforward = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD
+ dec_layers = cfg.MODEL.MASK_FORMER.DEC_LAYERS - 1
+ pre_norm = cfg.MODEL.MASK_FORMER.PRE_NORM
+ mask_dim = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
+ enforce_input_project = False
+ mask_classification = True
+ predictor = MultiScaleMaskedTransformerDecoder(in_channels,
+ num_classes,
+ mask_classification,
+ hidden_dim,
+ num_queries,
+ nheads,
+ dim_feedforward,
+ dec_layers,
+ pre_norm,
+ mask_dim,
+ enforce_input_project)
+ return predictor
+
+ def forward(self, features, mask=None):
+ mask_features, transformer_encoder_features, multi_scale_features = self.pixel_decoder.forward_features(features)
+ predictions = self.predictor(multi_scale_features, mask_features, mask)
+ return predictions, mask_features
+
+class MaskFormerModel(nn.Module):
+ def __init__(self, cfg):
+ super().__init__()
+ self.backbone = self.build_backbone(cfg)
+ self.sem_seg_head = MaskFormerHead(cfg, self.backbone_feature_shape)
+
+ def build_backbone(self, cfg):
+ model_type = cfg.MODEL.BACKBONE.TYPE
+ assert model_type in ('resnet18', 'resnet34', 'resnet50'), 'Unsupported model type: {}'.format(model_type)
+
+ channels = [64, 128, 256, 512]
+ if int(model_type[6:]) > 34:
+ channels = [item * 4 for item in channels]
+
+ backbone = ResNet(resnet_spec[model_type][0], resnet_spec[model_type][1])
+ # backbone.init_weights()
+ self.backbone_feature_shape = dict()
+ for i, channel in enumerate(channels):
+ self.backbone_feature_shape[f'res{i+2}'] = Dict({'channel': channel, 'stride': 2**(i+2)})
+ return backbone
+
+ def forward(self, inputs):
+ features = self.backbone(inputs)
+ outputs = self.sem_seg_head(features)
+ return outputs
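+# Illustrative forward-pass sketch (cfg is assumed to be a detectron2-style
+# CfgNode populated with the MODEL.* fields read above; input size is arbitrary):
+#   model = MaskFormerModel(cfg)
+#   images = torch.randn(2, 3, 512, 512)
+#   predictions, mask_features = model(images)   # per-query class logits / masks, pixel features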
diff --git a/modeling/__init__.py b/modeling/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..b1bbc6c11478da359b31f5f21ecfaa4e82d7ff86
--- /dev/null
+++ b/modeling/__init__.py
@@ -0,0 +1,2 @@
+from .backbone.resnet import ResNet
+from .MaskFormerModel import MaskFormerModel
diff --git a/modeling/__pycache__/MaskFormerModel.cpython-37.pyc b/modeling/__pycache__/MaskFormerModel.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..fbf6886be4c37e809e40845c8b58ddeda05ede5d
Binary files /dev/null and b/modeling/__pycache__/MaskFormerModel.cpython-37.pyc differ
diff --git a/modeling/__pycache__/MaskFormerModel.cpython-38.pyc b/modeling/__pycache__/MaskFormerModel.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e04645bcb73a7ef90e9736651188a0139ca44250
Binary files /dev/null and b/modeling/__pycache__/MaskFormerModel.cpython-38.pyc differ
diff --git a/modeling/__pycache__/__init__.cpython-37.pyc b/modeling/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..6e86423b956dafd10bf176bc087c8b5ca8607d15
Binary files /dev/null and b/modeling/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling/__pycache__/__init__.cpython-38.pyc b/modeling/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..871407b337d1b9500ce417bf7ee9aefeeb515efe
Binary files /dev/null and b/modeling/__pycache__/__init__.cpython-38.pyc differ
diff --git a/modeling/backbone/__pycache__/resnet.cpython-37.pyc b/modeling/backbone/__pycache__/resnet.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..331b939f3f6108da3f971287b03c3ee37752c84b
Binary files /dev/null and b/modeling/backbone/__pycache__/resnet.cpython-37.pyc differ
diff --git a/modeling/backbone/__pycache__/resnet.cpython-38.pyc b/modeling/backbone/__pycache__/resnet.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..b48d618ca3e6489d6392ac7729bcf8253e8f38c4
Binary files /dev/null and b/modeling/backbone/__pycache__/resnet.cpython-38.pyc differ
diff --git a/modeling/backbone/resnet.py b/modeling/backbone/resnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..96718cb7fbdc6a1ba244b66d4f684e6521fd17d7
--- /dev/null
+++ b/modeling/backbone/resnet.py
@@ -0,0 +1,190 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : resnet.py
+@Time : 2022/04/23 14:08:10
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : Backbone
+'''
+
+# here put the import lib
+import torch
+import torch.nn as nn
+from addict import Dict
+import torch.utils.model_zoo as model_zoo
+
+BN_MOMENTUM = 0.1
+
+model_urls = {'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
+ 'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
+ 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
+ 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
+ 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', }
+
+
+def conv3x3(in_planes, out_planes, stride=1):
+ """3x3 convolution with padding"""
+ return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
+
+
+class InvertedResidual(nn.Module):
+ def __init__(self, in_channels, hidden_dim, out_channels=3):
+ super(InvertedResidual, self).__init__()
+
+ self.conv = nn.Sequential(
+ nn.Conv2d(in_channels, hidden_dim, kernel_size=1, stride=1, padding=0, bias=True),
+ nn.BatchNorm2d(hidden_dim, momentum=BN_MOMENTUM),
+ nn.ReLU6(inplace=True),
+
+ # dw
+ # nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, stride=1, padding=1, bias=False),
+ # nn.BatchNorm2d(hidden_dim, momentum=BN_MOMENTUM),
+ # nn.ReLU(inplace=True),
+
+ # pw-linear
+ nn.Conv2d(hidden_dim, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
+ nn.BatchNorm2d(out_channels, momentum=BN_MOMENTUM),
+ nn.ReLU(inplace=True)
+ )
+
+ def forward(self, x):
+ return self.conv(x)
+
+
+class BasicBlock(nn.Module):
+ expansion = 1
+
+ def __init__(self, inplanes, planes, stride=1, downsample=None):
+ super(BasicBlock, self).__init__()
+ self.conv1 = conv3x3(inplanes, planes, stride)
+ self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.relu = nn.ReLU(inplace=True)
+ self.conv2 = conv3x3(planes, planes)
+ self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.downsample = downsample
+ self.stride = stride
+
+ def forward(self, x):
+ residual = x
+
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+
+ if self.downsample is not None:
+ residual = self.downsample(x)
+
+ out += residual
+ out = self.relu(out)
+
+ return out
+
+
+class Bottleneck(nn.Module):
+ expansion = 4
+
+ def __init__(self, inplanes, planes, stride=1, downsample=None):
+ super(Bottleneck, self).__init__()
+ self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
+ self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
+ self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
+ self.bn3 = nn.BatchNorm2d(planes * self.expansion, momentum=BN_MOMENTUM)
+ self.relu = nn.ReLU(inplace=True)
+ self.downsample = downsample
+ self.stride = stride
+
+ def forward(self, x):
+ residual = x
+
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+ out = self.relu(out)
+
+ out = self.conv3(out)
+ out = self.bn3(out)
+
+ if self.downsample is not None:
+ residual = self.downsample(x)
+
+ out += residual
+ out = self.relu(out)
+
+ return out
+
+
+class ResNet(nn.Module):
+ def __init__(self, block, layers):
+ super(ResNet, self).__init__()
+ self.inplanes = 64
+
+ self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
+ self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM)
+ self.relu = nn.ReLU(inplace=True)
+ self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
+ self.layer1 = self._make_layer(block, 64, layers[0])
+ self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
+ self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
+ self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
+
+ def _make_layer(self, block, planes, blocks, stride=1):
+ downsample = None
+ if stride != 1 or self.inplanes != planes * block.expansion:
+ downsample = nn.Sequential(nn.Conv2d(self.inplanes, planes * block.expansion,
+ kernel_size=1, stride=stride, bias=False),
+ nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM))
+
+ layers = []
+ layers.append(block(self.inplanes, planes, stride, downsample))
+ self.inplanes = planes * block.expansion
+ for i in range(1, blocks):
+ layers.append(block(self.inplanes, planes))
+ return nn.Sequential(*layers)
+
+ def forward(self, input_x):
+ out = {}
+ x = self.conv1(input_x)
+ x = self.bn1(x)
+ x = self.relu(x)
+ feature1 = self.maxpool(x)
+
+ feature2 = self.layer1(feature1)
+ out['res2'] = feature2
+
+ feature3 = self.layer2(feature2)
+ out['res3'] = feature3
+
+ feature4 = self.layer3(feature3)
+ out['res4'] = feature4
+
+ feature5 = self.layer4(feature4)
+ out['res5'] = feature5
+
+ return out
+
+ def init_weights(self, num_layers=50):
+ # url = model_urls['resnet{}'.format(num_layers)]
+ # pretrained_state_dict = model_zoo.load_url(url, model_dir='/home/code/pytorch_model/')
+ # print('=> loading pretrained model {}'.format(url))
+ pretrained_model = r'/home/code/pytorch_model/resnet50-19c8e357.pth'
+ pretrained_state_dict = torch.load(pretrained_model)
+
+ self.load_state_dict(pretrained_state_dict, strict=False)
+
+
+resnet_spec = {'resnet18': (BasicBlock, [2, 2, 2, 2]),
+ 'resnet34': (BasicBlock, [3, 4, 6, 3]),
+ 'resnet50': (Bottleneck, [3, 4, 6, 3]),
+ 'resnet101': (Bottleneck, [3, 4, 23, 3]),
+ 'resnet152': (Bottleneck, [3, 8, 36, 3])}
\ No newline at end of file
diff --git a/modeling/backbone/swin.py b/modeling/backbone/swin.py
new file mode 100644
index 0000000000000000000000000000000000000000..3b099d84396ac31d22881e5b6c9e53d2d0abaef3
--- /dev/null
+++ b/modeling/backbone/swin.py
@@ -0,0 +1,770 @@
+# --------------------------------------------------------
+# Swin Transformer
+# Copyright (c) 2021 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Written by Ze Liu, Yutong Lin, Yixuan Wei
+# --------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation/blob/main/mmseg/models/backbones/swin_transformer.py
+
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+from timm.models.layers import DropPath, to_2tuple, trunc_normal_
+
+from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec
+
+
+class Mlp(nn.Module):
+ """Multilayer perceptron."""
+
+ def __init__(
+ self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0
+ ):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
+ x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
+ return x
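+# Illustrative round trip for the two helpers above (H and W divisible by the
+# window size, so no padding is involved; shapes are assumptions):
+#   x = torch.randn(2, 14, 14, 96)              # (B, H, W, C)
+#   wins = window_partition(x, window_size=7)   # (2 * 4, 7, 7, 96)
+#   x_back = window_reverse(wins, 7, 14, 14)    # equal to x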
+
+
+class WindowAttention(nn.Module):
+ """Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both shifted and non-shifted windows.
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(
+ self,
+ dim,
+ window_size,
+ num_heads,
+ qkv_bias=True,
+ qk_scale=None,
+ attn_drop=0.0,
+ proj_drop=0.0,
+ ):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim ** -0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = nn.Parameter(
+ torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
+ ) # 2*Wh-1 * 2*Ww-1, nH
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = torch.arange(self.window_size[0])
+ coords_w = torch.arange(self.window_size[1])
+ coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
+ relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
+ relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+ relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
+ self.register_buffer("relative_position_index", relative_position_index)
+
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table, std=0.02)
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x, mask=None):
+ """Forward function.
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = (
+ self.qkv(x)
+ .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
+ .permute(2, 0, 3, 1, 4)
+ )
+ q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
+
+ q = q * self.scale
+ attn = q @ k.transpose(-2, -1)
+
+ relative_position_bias = self.relative_position_bias_table[
+ self.relative_position_index.view(-1)
+ ].view(
+ self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1
+ ) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.permute(
+ 2, 0, 1
+ ).contiguous() # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.view(-1, self.num_heads, N, N)
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+ x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
+
+class SwinTransformerBlock(nn.Module):
+ """Swin Transformer Block.
+ Args:
+ dim (int): Number of input channels.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(
+ self,
+ dim,
+ num_heads,
+ window_size=7,
+ shift_size=0,
+ mlp_ratio=4.0,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.0,
+ attn_drop=0.0,
+ drop_path=0.0,
+ act_layer=nn.GELU,
+ norm_layer=nn.LayerNorm,
+ ):
+ super().__init__()
+ self.dim = dim
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+ assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim,
+ window_size=to_2tuple(self.window_size),
+ num_heads=num_heads,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ attn_drop=attn_drop,
+ proj_drop=drop,
+ )
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(
+ in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop
+ )
+
+ self.H = None
+ self.W = None
+
+ def forward(self, x, mask_matrix):
+ """Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ mask_matrix: Attention mask for cyclic shift.
+ """
+ B, L, C = x.shape
+ H, W = self.H, self.W
+ assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.view(B, H, W, C)
+
+ # pad feature maps to multiples of window size
+ pad_l = pad_t = 0
+ pad_r = (self.window_size - W % self.window_size) % self.window_size
+ pad_b = (self.window_size - H % self.window_size) % self.window_size
+ x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
+ _, Hp, Wp, _ = x.shape
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
+ attn_mask = mask_matrix
+ else:
+ shifted_x = x
+ attn_mask = None
+
+ # partition windows
+ x_windows = window_partition(
+ shifted_x, self.window_size
+ ) # nW*B, window_size, window_size, C
+ x_windows = x_windows.view(
+ -1, self.window_size * self.window_size, C
+ ) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA
+ attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
+
+ # merge windows
+ attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
+ shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
+ else:
+ x = shifted_x
+
+ if pad_r > 0 or pad_b > 0:
+ x = x[:, :H, :W, :].contiguous()
+
+ x = x.view(B, H * W, C)
+
+ # FFN
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+
+class PatchMerging(nn.Module):
+ """Patch Merging Layer
+ Args:
+ dim (int): Number of input channels.
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
+ self.norm = norm_layer(4 * dim)
+
+ def forward(self, x, H, W):
+ """Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+
+ x = x.view(B, H, W, C)
+
+ # padding
+ pad_input = (H % 2 == 1) or (W % 2 == 1)
+ if pad_input:
+ x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
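+    # Illustrative example: merging halves the spatial resolution and doubles C
+    # (shapes are assumptions for illustration).
+    #   merge = PatchMerging(dim=96)
+    #   x = torch.randn(2, 14 * 14, 96)
+    #   y = merge(x, H=14, W=14)                # (2, 7 * 7, 192)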
+
+
+class BasicLayer(nn.Module):
+ """A basic Swin Transformer layer for one stage.
+ Args:
+ dim (int): Number of feature channels
+ depth (int): Depths of this stage.
+ num_heads (int): Number of attention head.
+ window_size (int): Local window size. Default: 7.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ """
+
+ def __init__(
+ self,
+ dim,
+ depth,
+ num_heads,
+ window_size=7,
+ mlp_ratio=4.0,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.0,
+ attn_drop=0.0,
+ drop_path=0.0,
+ norm_layer=nn.LayerNorm,
+ downsample=None,
+ use_checkpoint=False,
+ ):
+ super().__init__()
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.depth = depth
+ self.use_checkpoint = use_checkpoint
+
+ # build blocks
+ self.blocks = nn.ModuleList(
+ [
+ SwinTransformerBlock(
+ dim=dim,
+ num_heads=num_heads,
+ window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop,
+ attn_drop=attn_drop,
+ drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
+ norm_layer=norm_layer,
+ )
+ for i in range(depth)
+ ]
+ )
+
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+
+ def forward(self, x, H, W):
+ """Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (
+ slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None),
+ )
+ w_slices = (
+ slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None),
+ )
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(
+ img_mask, self.window_size
+ ) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
+ attn_mask == 0, float(0.0)
+ )
+
+ for blk in self.blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask)
+ if self.downsample is not None:
+ x_down = self.downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x, H, W, x_down, Wh, Ww
+ else:
+ return x, H, W, x, H, W
+
+
+class PatchEmbed(nn.Module):
+ """Image to Patch Embedding
+ Args:
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ patch_size = to_2tuple(patch_size)
+ self.patch_size = patch_size
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ """Forward function."""
+ # padding
+ _, _, H, W = x.size()
+ if W % self.patch_size[1] != 0:
+ x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
+ if H % self.patch_size[0] != 0:
+ x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
+
+ x = self.proj(x) # B C Wh Ww
+ if self.norm is not None:
+ Wh, Ww = x.size(2), x.size(3)
+ x = x.flatten(2).transpose(1, 2)
+ x = self.norm(x)
+ x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
+
+ return x
+
+
+class SwinTransformer(nn.Module):
+ """Swin Transformer backbone.
+ A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
+ https://arxiv.org/pdf/2103.14030
+ Args:
+ pretrain_img_size (int): Input image size for training the pretrained model,
+ used in absolute position embedding. Default 224.
+ patch_size (int | tuple(int)): Patch size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ depths (tuple[int]): Depths of each Swin Transformer stage.
+ num_heads (tuple[int]): Number of attention heads in each stage.
+ window_size (int): Window size. Default: 7.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
+ qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
+ drop_rate (float): Dropout rate.
+ attn_drop_rate (float): Attention dropout rate. Default: 0.
+ drop_path_rate (float): Stochastic depth rate. Default: 0.2.
+ norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
+ ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
+ patch_norm (bool): If True, add normalization after patch embedding. Default: True.
+ out_indices (Sequence[int]): Output from which stages.
+ frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
+ -1 means not freezing any parameters.
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ """
+
+ def __init__(
+ self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.0,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.0,
+ attn_drop_rate=0.0,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ use_checkpoint=False,
+ ):
+ super().__init__()
+
+ self.pretrain_img_size = pretrain_img_size
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.out_indices = out_indices
+ self.frozen_stages = frozen_stages
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ patch_size=patch_size,
+ in_chans=in_chans,
+ embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None,
+ )
+
+ # absolute position embedding
+ if self.ape:
+ pretrain_img_size = to_2tuple(pretrain_img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [
+ pretrain_img_size[0] // patch_size[0],
+ pretrain_img_size[1] // patch_size[1],
+ ]
+
+ self.absolute_pos_embed = nn.Parameter(
+ torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
+ )
+ trunc_normal_(self.absolute_pos_embed, std=0.02)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = [
+ x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
+ ] # stochastic depth decay rule
+
+ # build layers
+ self.layers = nn.ModuleList()
+ for i_layer in range(self.num_layers):
+ layer = BasicLayer(
+ dim=int(embed_dim * 2 ** i_layer),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop_rate,
+ attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
+ norm_layer=norm_layer,
+ downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
+ use_checkpoint=use_checkpoint,
+ )
+ self.layers.append(layer)
+
+ num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
+ self.num_features = num_features
+
+ # add a norm layer for each output
+ for i_layer in out_indices:
+ layer = norm_layer(num_features[i_layer])
+ layer_name = f"norm{i_layer}"
+ self.add_module(layer_name, layer)
+
+ self._freeze_stages()
+
+ def _freeze_stages(self):
+ if self.frozen_stages >= 0:
+ self.patch_embed.eval()
+ for param in self.patch_embed.parameters():
+ param.requires_grad = False
+
+ if self.frozen_stages >= 1 and self.ape:
+ self.absolute_pos_embed.requires_grad = False
+
+ if self.frozen_stages >= 2:
+ self.pos_drop.eval()
+ for i in range(0, self.frozen_stages - 1):
+ m = self.layers[i]
+ m.eval()
+ for param in m.parameters():
+ param.requires_grad = False
+
+ def init_weights(self, pretrained=None):
+ """Initialize the weights in backbone.
+ Args:
+ pretrained (str, optional): Path to pre-trained weights.
+ Defaults to None.
+ """
+
+ def _init_weights(m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight, std=0.02)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ nn.init.constant_(m.bias, 0)
+ elif isinstance(m, nn.LayerNorm):
+ nn.init.constant_(m.bias, 0)
+ nn.init.constant_(m.weight, 1.0)
+
+ def forward(self, x):
+ """Forward function."""
+ x = self.patch_embed(x)
+
+ Wh, Ww = x.size(2), x.size(3)
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(
+ self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
+ )
+ x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
+ else:
+ x = x.flatten(2).transpose(1, 2)
+ x = self.pos_drop(x)
+
+ outs = {}
+ for i in range(self.num_layers):
+ layer = self.layers[i]
+ x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
+
+ if i in self.out_indices:
+ norm_layer = getattr(self, f"norm{i}")
+ x_out = norm_layer(x_out)
+
+ out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
+ outs["res{}".format(i + 2)] = out
+
+ return outs
+
+ def train(self, mode=True):
+ """Convert the model into training mode while keep layers freezed."""
+ super(SwinTransformer, self).train(mode)
+ self._freeze_stages()
+
+
+@BACKBONE_REGISTRY.register()
+class D2SwinTransformer(SwinTransformer, Backbone):
+ def __init__(self, cfg, input_shape):
+
+ pretrain_img_size = cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE
+ patch_size = cfg.MODEL.SWIN.PATCH_SIZE
+ in_chans = 3
+ embed_dim = cfg.MODEL.SWIN.EMBED_DIM
+ depths = cfg.MODEL.SWIN.DEPTHS
+ num_heads = cfg.MODEL.SWIN.NUM_HEADS
+ window_size = cfg.MODEL.SWIN.WINDOW_SIZE
+ mlp_ratio = cfg.MODEL.SWIN.MLP_RATIO
+ qkv_bias = cfg.MODEL.SWIN.QKV_BIAS
+ qk_scale = cfg.MODEL.SWIN.QK_SCALE
+ drop_rate = cfg.MODEL.SWIN.DROP_RATE
+ attn_drop_rate = cfg.MODEL.SWIN.ATTN_DROP_RATE
+ drop_path_rate = cfg.MODEL.SWIN.DROP_PATH_RATE
+ norm_layer = nn.LayerNorm
+ ape = cfg.MODEL.SWIN.APE
+ patch_norm = cfg.MODEL.SWIN.PATCH_NORM
+ use_checkpoint = cfg.MODEL.SWIN.USE_CHECKPOINT
+
+ super().__init__(
+ pretrain_img_size,
+ patch_size,
+ in_chans,
+ embed_dim,
+ depths,
+ num_heads,
+ window_size,
+ mlp_ratio,
+ qkv_bias,
+ qk_scale,
+ drop_rate,
+ attn_drop_rate,
+ drop_path_rate,
+ norm_layer,
+ ape,
+ patch_norm,
+ use_checkpoint=use_checkpoint,
+ )
+
+ self._out_features = cfg.MODEL.SWIN.OUT_FEATURES
+
+ self._out_feature_strides = {
+ "res2": 4,
+ "res3": 8,
+ "res4": 16,
+ "res5": 32,
+ }
+ self._out_feature_channels = {
+ "res2": self.num_features[0],
+ "res3": self.num_features[1],
+ "res4": self.num_features[2],
+ "res5": self.num_features[3],
+ }
+
+ def forward(self, x):
+ """
+ Args:
+ x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.
+ Returns:
+ dict[str->Tensor]: names and the corresponding features
+ """
+ assert (
+ x.dim() == 4
+ ), f"SwinTransformer takes an input of shape (N, C, H, W). Got {x.shape} instead!"
+ outputs = {}
+ y = super().forward(x)
+ for k in y.keys():
+ if k in self._out_features:
+ outputs[k] = y[k]
+ return outputs
+
+ def output_shape(self):
+ return {
+ name: ShapeSpec(
+ channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
+ )
+ for name in self._out_features
+ }
+
+ @property
+ def size_divisibility(self):
+ return 32
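+
+# Usage sketch (illustrative only; shapes assume the defaults above and a 224x224 input):
+#   backbone = SwinTransformer(embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24])
+#   feats = backbone(torch.randn(1, 3, 224, 224))
+#   # feats["res2"]: (1, 96, 56, 56), feats["res3"]: (1, 192, 28, 28),
+#   # feats["res4"]: (1, 384, 14, 14), feats["res5"]: (1, 768, 7, 7)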
diff --git a/modeling/pixel_decoder/__init__.py b/modeling/pixel_decoder/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9020c2df23e2af280b7bb168b996ae9eaf312eb8
--- /dev/null
+++ b/modeling/pixel_decoder/__init__.py
@@ -0,0 +1 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
diff --git a/modeling/pixel_decoder/__pycache__/__init__.cpython-37.pyc b/modeling/pixel_decoder/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..74d4aa04e80e3e22f09e3e928a229508948364a6
Binary files /dev/null and b/modeling/pixel_decoder/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling/pixel_decoder/__pycache__/__init__.cpython-38.pyc b/modeling/pixel_decoder/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..976c34489f4732915909b3e46e458c598aa4f89f
Binary files /dev/null and b/modeling/pixel_decoder/__pycache__/__init__.cpython-38.pyc differ
diff --git a/modeling/pixel_decoder/__pycache__/msdeformattn.cpython-37.pyc b/modeling/pixel_decoder/__pycache__/msdeformattn.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e322b9bd789f723154eb8e2f1c308b0e0f979f1a
Binary files /dev/null and b/modeling/pixel_decoder/__pycache__/msdeformattn.cpython-37.pyc differ
diff --git a/modeling/pixel_decoder/__pycache__/msdeformattn.cpython-38.pyc b/modeling/pixel_decoder/__pycache__/msdeformattn.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..08166610b795baa041f84ed20c3e67bbe3e84504
Binary files /dev/null and b/modeling/pixel_decoder/__pycache__/msdeformattn.cpython-38.pyc differ
diff --git a/modeling/pixel_decoder/msdeformattn.py b/modeling/pixel_decoder/msdeformattn.py
new file mode 100644
index 0000000000000000000000000000000000000000..80008afdf93cb6188d5406da5d195683a4a1c668
--- /dev/null
+++ b/modeling/pixel_decoder/msdeformattn.py
@@ -0,0 +1,311 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : msdeformattn.py
+@Time : 2022/10/02 16:51:09
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc    :   Adapted from Mask2Former, with the detectron2 dependency removed
+'''
+
+# here put the import lib
+
+import numpy as np
+import fvcore.nn.weight_init as weight_init
+import torch
+from torch import nn
+from torch.nn import functional as F
+
+
+from ..transformer_decoder.position_encoding import PositionEmbeddingSine
+from ..transformer_decoder.transformer import _get_clones, _get_activation_fn
+from .ops.modules import MSDeformAttn
+
+# MSDeformAttn Transformer encoder in deformable detr
+class MSDeformAttnTransformerEncoderLayer(nn.Module):
+ def __init__(self,
+ d_model=256, d_ffn=1024,
+ dropout=0.1, activation="relu",
+ n_levels=4, n_heads=8, n_points=4):
+ super().__init__()
+
+ # self attention
+ self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)
+ self.dropout1 = nn.Dropout(dropout)
+ self.norm1 = nn.LayerNorm(d_model)
+
+ # ffn
+ self.linear1 = nn.Linear(d_model, d_ffn)
+ self.activation = _get_activation_fn(activation)
+ self.dropout2 = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(d_ffn, d_model)
+ self.dropout3 = nn.Dropout(dropout)
+ self.norm2 = nn.LayerNorm(d_model)
+
+ @staticmethod
+ def with_pos_embed(tensor, pos):
+ return tensor if pos is None else tensor + pos
+
+ def forward_ffn(self, src):
+ src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
+ src = src + self.dropout3(src2)
+ src = self.norm2(src)
+ return src
+
+ def forward(self, src, pos, reference_points, spatial_shapes, level_start_index, padding_mask=None):
+ # self attention
+ src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, padding_mask)
+ src = src + self.dropout1(src2)
+ src = self.norm1(src)
+
+ # ffn
+ src = self.forward_ffn(src)
+
+ return src
+
+
+class MSDeformAttnTransformerEncoder(nn.Module):
+ def __init__(self, encoder_layer, num_layers):
+ super().__init__()
+ self.layers = _get_clones(encoder_layer, num_layers)
+ self.num_layers = num_layers
+
+ @staticmethod
+ def get_reference_points(spatial_shapes, valid_ratios, device):
+ reference_points_list = []
+ for lvl, (H_, W_) in enumerate(spatial_shapes):
+ ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),
+ torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device))
+ ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)
+ ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)
+ ref = torch.stack((ref_x, ref_y), -1) # [1, H_ * W_, 2]
+ reference_points_list.append(ref)
+ reference_points = torch.cat(reference_points_list, 1)
+ reference_points = reference_points[:, :, None] * valid_ratios[:, None]
+ return reference_points
+
+ def forward(self, src, spatial_shapes, level_start_index, valid_ratios, pos=None, padding_mask=None):
+ output = src
+ reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device)
+ for _, layer in enumerate(self.layers):
+ output = layer(output, pos, reference_points, spatial_shapes, level_start_index, padding_mask)
+
+ return output
+
+
+class MSDeformAttnTransformerEncoderOnly(nn.Module):
+ def __init__(self, d_model=256, nhead=8,
+ num_encoder_layers=6, dim_feedforward=1024, dropout=0.1,
+ activation="relu",
+ num_feature_levels=4, enc_n_points=4,
+ ):
+ super().__init__()
+
+ self.d_model = d_model
+ self.nhead = nhead
+
+ encoder_layer = MSDeformAttnTransformerEncoderLayer(d_model, dim_feedforward,
+ dropout, activation,
+ num_feature_levels, nhead, enc_n_points)
+ self.encoder = MSDeformAttnTransformerEncoder(encoder_layer, num_encoder_layers)
+
+ self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+ for m in self.modules():
+ if isinstance(m, MSDeformAttn):
+ m._reset_parameters()
+ nn.init.normal_(self.level_embed)
+
+ def get_valid_ratio(self, mask):
+ _, H, W = mask.shape
+ valid_H = torch.sum(~mask[:, :, 0], 1)
+ valid_W = torch.sum(~mask[:, 0, :], 1)
+ valid_ratio_h = valid_H.float() / H
+ valid_ratio_w = valid_W.float() / W
+ valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
+ return valid_ratio
+
+ def forward(self, srcs, pos_embeds):
+ masks = [torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) for x in srcs]
+ # prepare input for encoder
+ src_flatten = []
+ mask_flatten = []
+ lvl_pos_embed_flatten = []
+ spatial_shapes = []
+ for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):
+ bs, c, h, w = src.shape
+ spatial_shape = (h, w)
+ spatial_shapes.append(spatial_shape)
+ src = src.flatten(2).transpose(1, 2)
+ mask = mask.flatten(1)
+ pos_embed = pos_embed.flatten(2).transpose(1, 2)
+ lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)
+ lvl_pos_embed_flatten.append(lvl_pos_embed)
+ src_flatten.append(src)
+ mask_flatten.append(mask)
+ src_flatten = torch.cat(src_flatten, 1)
+ mask_flatten = torch.cat(mask_flatten, 1)
+ lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1)
+ spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)
+ level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1]))
+ valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
+
+ # encoder
+ memory = self.encoder(src_flatten, spatial_shapes, level_start_index, valid_ratios, lvl_pos_embed_flatten, mask_flatten)
+
+ return memory, spatial_shapes, level_start_index
+
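+# The encoder-only transformer above flattens all levels into a single token sequence:
+# `memory` is (N, sum_l H_l*W_l, d_model), `spatial_shapes` is (n_levels, 2), and
+# `level_start_index` gives the offset of each level's tokens; forward_features below
+# uses these to split `memory` back into per-level feature maps.
+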
+class MSDeformAttnPixelDecoder(nn.Module):
+ def __init__(
+ self,
+ input_shape,
+ transformer_dropout=0.1,
+ transformer_nheads=8,
+ transformer_dim_feedforward=2048,
+ transformer_enc_layers=6,
+ conv_dim=256,
+ mask_dim=256,
+
+ # deformable transformer encoder args
+ transformer_in_features=["res3", "res4", "res5"],
+ common_stride=4,
+ ):
+ super().__init__()
+ # (channel, stride) of the backbone's ["res3", "res4", "res5"] feature levels, e.g. [(32, 4), (64, 8), (128, 16), (256, 32)]
+ transformer_input_shape = {k: v for k, v in input_shape.items() if k in transformer_in_features}
+
+ # this is the input shape of pixel decoder
+ self.in_features = [k for k, v in input_shape.items()] # starting from "res2" to "res5"
+ self.feature_channels = [v.channel for k, v in input_shape.items()] # eg. [16, 64, 128, 256]
+
+ # this is the input shape of the transformer encoder (it may use fewer features than the pixel decoder)
+ self.transformer_in_features = [k for k, v in transformer_input_shape.items()] # starting from "res3" to "res5"
+ transformer_in_channels = [v.channel for k, v in transformer_input_shape.items()] # eg. [64, 128, 256]
+ self.transformer_feature_strides = [v.stride for k, v in transformer_input_shape.items()] # to decide extra FPN layers
+
+ self.transformer_num_feature_levels = len(self.transformer_in_features)
+ if self.transformer_num_feature_levels > 1:
+ input_proj_list = []
+ # from low resolution to high resolution (res5 -> res3)
+ for in_channels in transformer_in_channels[::-1]:
+ input_proj_list.append(nn.Sequential(
+ nn.Conv2d(in_channels, conv_dim, kernel_size=1),
+ nn.GroupNorm(32, conv_dim),
+ ))
+ self.input_proj = nn.ModuleList(input_proj_list)
+ else:
+ self.input_proj = nn.ModuleList([
+ nn.Sequential(
+ nn.Conv2d(transformer_in_channels[-1], conv_dim, kernel_size=1),
+ nn.GroupNorm(32, conv_dim),
+ )])
+
+ for proj in self.input_proj:
+ nn.init.xavier_uniform_(proj[0].weight, gain=1)
+ nn.init.constant_(proj[0].bias, 0)
+
+ self.transformer = MSDeformAttnTransformerEncoderOnly(
+ d_model=conv_dim,
+ dropout=transformer_dropout,
+ nhead=transformer_nheads,
+ dim_feedforward=transformer_dim_feedforward,
+ num_encoder_layers=transformer_enc_layers,
+ num_feature_levels=self.transformer_num_feature_levels,
+ )
+ N_steps = conv_dim // 2
+ self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
+
+ self.mask_dim = mask_dim
+ # use 1x1 conv instead
+ self.mask_features = nn.Conv2d(
+ conv_dim,
+ mask_dim,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ )
+ weight_init.c2_xavier_fill(self.mask_features)
+
+ self.maskformer_num_feature_levels = 3 # always use 3 scales
+ self.common_stride = common_stride
+
+ # extra fpn levels
+ stride = min(self.transformer_feature_strides)
+ self.num_fpn_levels = int(np.log2(stride) - np.log2(self.common_stride))
+
+ lateral_convs = []
+ output_convs = []
+
+ for idx, in_channels in enumerate(self.feature_channels[:self.num_fpn_levels]): # res2 -> fpn
+ lateral_conv = nn.Sequential(nn.Conv2d(in_channels, conv_dim, kernel_size=1),
+ nn.GroupNorm(32, conv_dim),
+ nn.ReLU(inplace=True))
+
+ output_conv = nn.Sequential(nn.Conv2d(conv_dim, conv_dim, kernel_size=3, stride=1, padding=1),
+ nn.GroupNorm(32, conv_dim),
+ nn.ReLU(inplace=True))
+
+ weight_init.c2_xavier_fill(lateral_conv[0])
+ weight_init.c2_xavier_fill(output_conv[0])
+ self.add_module("adapter_{}".format(idx + 1), lateral_conv)
+ self.add_module("layer_{}".format(idx + 1), output_conv)
+
+ lateral_convs.append(lateral_conv)
+ output_convs.append(output_conv)
+ # Place convs into top-down order (from low to high resolution)
+ # to make the top-down computation in forward clearer.
+ self.lateral_convs = lateral_convs[::-1]
+ self.output_convs = output_convs[::-1]
+
+ def forward_features(self, features):
+ srcs = []
+ pos = []
+ # Reverse feature maps into top-down order (from low to high resolution), 'res5' -> 'res3'
+ for idx, f in enumerate(self.transformer_in_features[::-1]):
+ x = features[f].float() # deformable detr does not support half precision
+ srcs.append(self.input_proj[idx](x))
+ pos.append(self.pe_layer(x))
+
+ y, spatial_shapes, level_start_index = self.transformer(srcs, pos)
+ bs = y.shape[0]
+
+ split_size_or_sections = [None] * self.transformer_num_feature_levels
+ for i in range(self.transformer_num_feature_levels):
+ if i < self.transformer_num_feature_levels - 1:
+ split_size_or_sections[i] = level_start_index[i + 1] - level_start_index[i]
+ else:
+ split_size_or_sections[i] = y.shape[1] - level_start_index[i]
+ y = torch.split(y, split_size_or_sections, dim=1)
+
+ out = []
+ multi_scale_features = []
+ num_cur_levels = 0
+ for i, z in enumerate(y):
+ out.append(z.transpose(1, 2).view(bs, -1, spatial_shapes[i][0], spatial_shapes[i][1]))
+
+ # append `out` with extra FPN levels
+ # Reverse feature maps into top-down order (from low to high resolution)
+ for idx, f in enumerate(self.in_features[:self.num_fpn_levels][::-1]):
+ x = features[f].float()
+ lateral_conv = self.lateral_convs[idx]
+ output_conv = self.output_convs[idx]
+ cur_fpn = lateral_conv(x)
+ # Following FPN, add the upsampled coarser map (bilinear interpolation is used here)
+ y = cur_fpn + F.interpolate(out[-1], size=cur_fpn.shape[-2:], mode="bilinear", align_corners=False)
+ y = output_conv(y)
+ out.append(y)
+
+ for o in out:
+ if num_cur_levels < self.maskformer_num_feature_levels:
+ multi_scale_features.append(o)
+ num_cur_levels += 1
+
+ return self.mask_features(out[-1]), out[0], multi_scale_features
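+
+# Output sketch (with the default arguments and "res2"..."res5" inputs): forward_features
+# returns (mask_features, out[0], multi_scale_features), where mask_features is an
+# (N, mask_dim, H/4, W/4) map produced from the highest-resolution FPN output, out[0] is
+# the stride-32 ("res5") encoder feature, and multi_scale_features holds the three
+# conv_dim-channel encoder maps that a transformer decoder would typically consume.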
diff --git a/modeling/pixel_decoder/ops/functions/__init__.py b/modeling/pixel_decoder/ops/functions/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..29ad241567448e2d6513c327dbeca2403a4fc3c2
--- /dev/null
+++ b/modeling/pixel_decoder/ops/functions/__init__.py
@@ -0,0 +1,13 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+#from .ms_deform_attn_func import MSDeformAttnFunction
+
diff --git a/modeling/pixel_decoder/ops/functions/__pycache__/__init__.cpython-37.pyc b/modeling/pixel_decoder/ops/functions/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..f350e87d033e25a00c61091fdc7c1df6d14c9694
Binary files /dev/null and b/modeling/pixel_decoder/ops/functions/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling/pixel_decoder/ops/functions/__pycache__/__init__.cpython-38.pyc b/modeling/pixel_decoder/ops/functions/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..60bd06aa600b9083763343d3041641fa247c8b28
Binary files /dev/null and b/modeling/pixel_decoder/ops/functions/__pycache__/__init__.cpython-38.pyc differ
diff --git a/modeling/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-37.pyc b/modeling/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..8102426d2854cb34d74a21f392c7bd8a6bddd42c
Binary files /dev/null and b/modeling/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-37.pyc differ
diff --git a/modeling/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-38.pyc b/modeling/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..567a64a0e41b036dab51f4e587a12a3977c1a0b8
Binary files /dev/null and b/modeling/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-38.pyc differ
diff --git a/modeling/pixel_decoder/ops/functions/ms_deform_attn_func.py b/modeling/pixel_decoder/ops/functions/ms_deform_attn_func.py
new file mode 100644
index 0000000000000000000000000000000000000000..fdd325f53009bf252bda31d5dfbe4c713b495933
--- /dev/null
+++ b/modeling/pixel_decoder/ops/functions/ms_deform_attn_func.py
@@ -0,0 +1,77 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import torch
+import torch.nn.functional as F
+from torch.autograd import Function
+from torch.autograd.function import once_differentiable
+
+#try:
+# import MultiScaleDeformableAttention as MSDA
+#except ModuleNotFoundError as e:
+# info_string = (
+# "\n\nPlease compile MultiScaleDeformableAttention CUDA op with the following commands:\n"
+# "\t`cd mask2former/modeling/pixel_decoder/ops`\n"
+# "\t`sh make.sh`\n"
+# )
+# raise ModuleNotFoundError(info_string)
+#
+#
+#class MSDeformAttnFunction(Function):
+# @staticmethod
+# def forward(ctx, value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, im2col_step):
+# ctx.im2col_step = im2col_step
+# output = MSDA.ms_deform_attn_forward(
+# value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, ctx.im2col_step)
+# ctx.save_for_backward(value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights)
+# return output
+#
+# @staticmethod
+# @once_differentiable
+# def backward(ctx, grad_output):
+# value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights = ctx.saved_tensors
+# grad_value, grad_sampling_loc, grad_attn_weight = \
+# MSDA.ms_deform_attn_backward(
+# value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, grad_output, ctx.im2col_step)
+#
+# return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None
+
+
+def ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):
+ """
+ @value: (bs, sum(H_l*W_l), num_heads, head_dim)
+ @sampling_locations: (bs, num_queries, num_heads, num_levels, num_points, 2)
+ @attention_weights: (bs, num_queries, num_heads, num_levels, num_points)
+ """
+ N_, S_, M_, Dim = value.shape
+ _, Lq_, M_, L_, P_, _ = sampling_locations.shape
+ value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)
+ sampling_grids = 2 * sampling_locations - 1 # map the range from [0, 1] to [-1, 1], as required by F.grid_sample
+ sampling_value_list = []
+ for lid_, (H_, W_) in enumerate(value_spatial_shapes):
+ # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_
+ value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, Dim, H_, W_) # e.g. [bs * 8, 32, 28, 28]
+ # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2
+ sampling_grid_l_ = sampling_grids[:, :, :, lid_]
+ sampling_grid_l_ = sampling_grid_l_.transpose(1, 2).flatten(0, 1) # e.g. [bs * 8, 1045, 4, 2]
+ # N_*M_, D_, Lq_, P_
+ sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_, mode='bilinear', padding_mode='zeros', align_corners=False) # eg. [bs * 8, 32, 1045, 4]
+ sampling_value_list.append(sampling_value_l_)
+
+ # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_*M_, 1, Lq_, L_*P_)
+ attention_weights = attention_weights.transpose(1, 2).reshape(N_*M_, 1, Lq_, L_*P_) # e.g. [bs * 8, 1, 1045, 4 * 4]: 4 feature levels * 4 sampling points
+ # torch.stack(sampling_value_list, dim=-2): [bs * 8, 32, 1045, num_levels, 4] -> [bs * 8, 32, 1045, 4 * 4] (4 feature levels * 4 sampling points)
+ output = (torch.stack(sampling_value_list, dim=-2).squeeze(2).flatten(-2) * attention_weights).sum(-1).view(N_, M_*Dim, Lq_)
+ return output.transpose(1, 2).contiguous()
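+
+# Shape walk-through (illustrative): with bs=2, two levels of (28, 28) and (14, 14),
+# 8 heads, head_dim=32 and 4 sampling points, `value` is (2, 28*28 + 14*14, 8, 32),
+# `sampling_locations` is (2, Lq, 8, 2, 4, 2), `attention_weights` is (2, Lq, 8, 2, 4),
+# and the returned tensor is (2, Lq, 256).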
diff --git a/modeling/pixel_decoder/ops/make.sh b/modeling/pixel_decoder/ops/make.sh
new file mode 100644
index 0000000000000000000000000000000000000000..7b38cdbf48f3571d986a33e7563b517952b51bb2
--- /dev/null
+++ b/modeling/pixel_decoder/ops/make.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+python setup.py build install
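+
+# Note: run this from modeling/pixel_decoder/ops, e.g. `cd modeling/pixel_decoder/ops && sh make.sh`.
+# setup.py also honours FORCE_CUDA=1 for building on a machine without a visible GPU
+# (CUDA_HOME must still point to a CUDA toolkit).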
diff --git a/modeling/pixel_decoder/ops/modules/__init__.py b/modeling/pixel_decoder/ops/modules/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..6fdbf03359958f3d67ab00f879bf6b61a6c8f06a
--- /dev/null
+++ b/modeling/pixel_decoder/ops/modules/__init__.py
@@ -0,0 +1,12 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from .ms_deform_attn import MSDeformAttn
diff --git a/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-310.pyc b/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..3bb652cb561f53fd9acfe2836175b52f5de7635b
Binary files /dev/null and b/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-310.pyc differ
diff --git a/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-37.pyc b/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..62728959456a0ed66bcc920dc94e6dcd920ccaf3
Binary files /dev/null and b/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-38.pyc b/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..6e71723b5d9d9659f3dcdaf456616b2856404697
Binary files /dev/null and b/modeling/pixel_decoder/ops/modules/__pycache__/__init__.cpython-38.pyc differ
diff --git a/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-310.pyc b/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..16c420d9adcc2ed93a2d1b1dc54708fa4baf7bba
Binary files /dev/null and b/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-310.pyc differ
diff --git a/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-37.pyc b/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..a4d77afef99f07bbb29a4b7bebf60f20e9141969
Binary files /dev/null and b/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-37.pyc differ
diff --git a/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-38.pyc b/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e75a2d265dec1e20f434027ec6614d8c2444d8f0
Binary files /dev/null and b/modeling/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-38.pyc differ
diff --git a/modeling/pixel_decoder/ops/modules/ms_deform_attn.py b/modeling/pixel_decoder/ops/modules/ms_deform_attn.py
new file mode 100644
index 0000000000000000000000000000000000000000..3d4b0602ef468c48dd60351ba5047d1b1921bcce
--- /dev/null
+++ b/modeling/pixel_decoder/ops/modules/ms_deform_attn.py
@@ -0,0 +1,121 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import warnings
+import math
+
+import torch
+from torch import nn
+import torch.nn.functional as F
+from torch.nn.init import xavier_uniform_, constant_
+
+#from ..functions import MSDeformAttnFunction
+from ..functions.ms_deform_attn_func import ms_deform_attn_core_pytorch
+
+
+def _is_power_of_2(n):
+ if (not isinstance(n, int)) or (n < 0):
+ raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n)))
+ return (n & (n-1) == 0) and n != 0
+
+
+class MSDeformAttn(nn.Module):
+ def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4):
+ """
+ Multi-Scale Deformable Attention Module
+ :param d_model hidden dimension
+ :param n_levels number of feature levels
+ :param n_heads number of attention heads
+ :param n_points number of sampling points per attention head per feature level
+ """
+ super().__init__()
+ if d_model % n_heads != 0:
+ raise ValueError('d_model must be divisible by n_heads, but got {} and {}'.format(d_model, n_heads))
+ _d_per_head = d_model // n_heads
+ # _d_per_head should preferably be a power of 2, which is more efficient in the CUDA implementation
+ if not _is_power_of_2(_d_per_head):
+ warnings.warn("You'd better set d_model in MSDeformAttn to make the dimension of each attention head a power of 2 "
+ "which is more efficient in our CUDA implementation.")
+
+ self.im2col_step = 128
+
+ self.d_model = d_model
+ self.n_levels = n_levels
+ self.n_heads = n_heads
+ self.n_points = n_points
+
+ self.sampling_offsets = nn.Linear(d_model, n_heads * n_levels * n_points * 2)
+ self.attention_weights = nn.Linear(d_model, n_heads * n_levels * n_points)
+ self.value_proj = nn.Linear(d_model, d_model)
+ self.output_proj = nn.Linear(d_model, d_model)
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ constant_(self.sampling_offsets.weight.data, 0.)
+ thetas = torch.arange(self.n_heads, dtype=torch.float32) * (2.0 * math.pi / self.n_heads)
+ grid_init = torch.stack([thetas.cos(), thetas.sin()], -1)
+ grid_init = (grid_init / grid_init.abs().max(-1, keepdim=True)[0]).view(self.n_heads, 1, 1, 2).repeat(1, self.n_levels, self.n_points, 1)
+ for i in range(self.n_points):
+ grid_init[:, :, i, :] *= i + 1
+ with torch.no_grad():
+ self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1))
+ constant_(self.attention_weights.weight.data, 0.)
+ constant_(self.attention_weights.bias.data, 0.)
+ xavier_uniform_(self.value_proj.weight.data)
+ constant_(self.value_proj.bias.data, 0.)
+ xavier_uniform_(self.output_proj.weight.data)
+ constant_(self.output_proj.bias.data, 0.)
+
+ def forward(self, query, reference_points, input_flatten, input_spatial_shapes, input_level_start_index, input_padding_mask=None):
+ """
+ :param query (N, Length_{query}, C)
+ :param reference_points (N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), bottom-right (1, 1), including padding area
+ or (N, Length_{query}, n_levels, 4), add additional (w, h) to form reference boxes
+ :param input_flatten (N, \sum_{l=0}^{L-1} H_l \cdot W_l, C)
+ :param input_spatial_shapes (n_levels, 2), [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})]
+ :param input_level_start_index (n_levels, ), [0, H_0*W_0, H_0*W_0+H_1*W_1, H_0*W_0+H_1*W_1+H_2*W_2, ..., H_0*W_0+H_1*W_1+...+H_{L-1}*W_{L-1}]
+ :param input_padding_mask (N, \sum_{l=0}^{L-1} H_l \cdot W_l), True for padding elements, False for non-padding elements
+
+ :return output (N, Length_{query}, C)
+ """
+ N, Len_q, _ = query.shape
+ N, Len_in, _ = input_flatten.shape
+ assert (input_spatial_shapes[:, 0] * input_spatial_shapes[:, 1]).sum() == Len_in
+
+ value = self.value_proj(input_flatten)
+ if input_padding_mask is not None:
+ value = value.masked_fill(input_padding_mask[..., None], float(0))
+ value = value.view(N, Len_in, self.n_heads, self.d_model // self.n_heads)
+ sampling_offsets = self.sampling_offsets(query).view(N, Len_q, self.n_heads, self.n_levels, self.n_points, 2)
+ attention_weights = self.attention_weights(query).view(N, Len_q, self.n_heads, self.n_levels * self.n_points)
+ attention_weights = F.softmax(attention_weights, -1).view(N, Len_q, self.n_heads, self.n_levels, self.n_points)
+
+ # N, Len_q, n_heads, n_levels, n_points, 2
+ # input_spatial_shapes stores (h, w) while reference_points stores (x, y), so the (h, w) order in input_spatial_shapes has to be swapped
+ offset_normalizer = torch.stack([input_spatial_shapes[..., 1], input_spatial_shapes[..., 0]], -1)
+ sampling_locations = reference_points[:, :, None, :, None, :] + sampling_offsets / offset_normalizer[None, None, None, :, None, :]
+
+ #output = MSDeformAttnFunction.apply(value, input_spatial_shapes, input_level_start_index, sampling_locations, attention_weights, self.im2col_step)
+ # try:
+ # output = MSDeformAttnFunction.apply(value, input_spatial_shapes, input_level_start_index, sampling_locations, attention_weights, self.im2col_step)
+ # except:
+ # CPU, for debug or test only
+ # output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights)
+
+ # pure-PyTorch implementation (the compiled MultiScaleDeformableAttention CUDA op is not required)
+ output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights)
+ output = self.output_proj(output)
+ return output
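+
+# Illustrative call sketch (assuming two feature levels of (28, 28) and (14, 14)):
+#   attn = MSDeformAttn(d_model=256, n_levels=2, n_heads=8, n_points=4)
+#   shapes = torch.as_tensor([[28, 28], [14, 14]])
+#   start_index = torch.as_tensor([0, 28 * 28])
+#   out = attn(query, reference_points, value_flatten, shapes, start_index)
+# where query and value_flatten are (N, 28*28 + 14*14, 256) and reference_points is
+# (N, 28*28 + 14*14, 2, 2) with values in [0, 1]; `out` has the same shape as `query`.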
diff --git a/modeling/pixel_decoder/ops/setup.py b/modeling/pixel_decoder/ops/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..3b57ad313ac8f9b6586892142da8ba943e516cec
--- /dev/null
+++ b/modeling/pixel_decoder/ops/setup.py
@@ -0,0 +1,78 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+import os
+import glob
+
+import torch
+
+from torch.utils.cpp_extension import CUDA_HOME
+from torch.utils.cpp_extension import CppExtension
+from torch.utils.cpp_extension import CUDAExtension
+
+from setuptools import find_packages
+from setuptools import setup
+
+requirements = ["torch", "torchvision"]
+
+def get_extensions():
+ this_dir = os.path.dirname(os.path.abspath(__file__))
+ extensions_dir = os.path.join(this_dir, "src")
+
+ main_file = glob.glob(os.path.join(extensions_dir, "*.cpp"))
+ source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp"))
+ source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu"))
+
+ sources = main_file + source_cpu
+ extension = CppExtension
+ extra_compile_args = {"cxx": []}
+ define_macros = []
+
+ # Force CUDA since torch asks for a device, not whether CUDA is actually available.
+ if (os.environ.get('FORCE_CUDA') or torch.cuda.is_available()) and CUDA_HOME is not None:
+ extension = CUDAExtension
+ sources += source_cuda
+ define_macros += [("WITH_CUDA", None)]
+ extra_compile_args["nvcc"] = [
+ "-DCUDA_HAS_FP16=1",
+ "-D__CUDA_NO_HALF_OPERATORS__",
+ "-D__CUDA_NO_HALF_CONVERSIONS__",
+ "-D__CUDA_NO_HALF2_OPERATORS__",
+ ]
+ else:
+ if CUDA_HOME is None:
+ raise NotImplementedError('CUDA_HOME is None. Please set environment variable CUDA_HOME.')
+ else:
+ raise NotImplementedError('No CUDA runtime is found. Please set FORCE_CUDA=1 or test it by running torch.cuda.is_available().')
+
+ sources = [os.path.join(extensions_dir, s) for s in sources]
+ include_dirs = [extensions_dir]
+ ext_modules = [
+ extension(
+ "MultiScaleDeformableAttention",
+ sources,
+ include_dirs=include_dirs,
+ define_macros=define_macros,
+ extra_compile_args=extra_compile_args,
+ )
+ ]
+ return ext_modules
+
+setup(
+ name="MultiScaleDeformableAttention",
+ version="1.0",
+ author="Weijie Su",
+ url="https://github.com/fundamentalvision/Deformable-DETR",
+ description="PyTorch Wrapper for CUDA Functions of Multi-Scale Deformable Attention",
+ packages=find_packages(exclude=("configs", "tests",)),
+ ext_modules=get_extensions(),
+ cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
+)
diff --git a/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp b/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..48757e2b0156b2c1513b615d2a17e5aee5172ae7
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
@@ -0,0 +1,46 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include <vector>
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+
+
+at::Tensor
+ms_deform_attn_cpu_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step)
+{
+ AT_ERROR("Not implement on cpu");
+}
+
+std::vector<at::Tensor>
+ms_deform_attn_cpu_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step)
+{
+ AT_ERROR("Not implement on cpu");
+}
+
diff --git a/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h b/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h
new file mode 100644
index 0000000000000000000000000000000000000000..51bb27e9ee828f967e8aa854c2d55574040c6d7e
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h
@@ -0,0 +1,38 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor
+ms_deform_attn_cpu_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step);
+
+std::vector<at::Tensor>
+ms_deform_attn_cpu_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step);
+
+
diff --git a/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.cu b/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.cu
new file mode 100644
index 0000000000000000000000000000000000000000..0c465dab3d636dfd6a44523c63f148b6e15084d9
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.cu
@@ -0,0 +1,158 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include <vector>
+#include "cuda/ms_deform_im2col_cuda.cuh"
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+
+at::Tensor ms_deform_attn_cuda_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step)
+{
+ AT_ASSERTM(value.is_contiguous(), "value tensor has to be contiguous");
+ AT_ASSERTM(spatial_shapes.is_contiguous(), "spatial_shapes tensor has to be contiguous");
+ AT_ASSERTM(level_start_index.is_contiguous(), "level_start_index tensor has to be contiguous");
+ AT_ASSERTM(sampling_loc.is_contiguous(), "sampling_loc tensor has to be contiguous");
+ AT_ASSERTM(attn_weight.is_contiguous(), "attn_weight tensor has to be contiguous");
+
+ AT_ASSERTM(value.type().is_cuda(), "value must be a CUDA tensor");
+ AT_ASSERTM(spatial_shapes.type().is_cuda(), "spatial_shapes must be a CUDA tensor");
+ AT_ASSERTM(level_start_index.type().is_cuda(), "level_start_index must be a CUDA tensor");
+ AT_ASSERTM(sampling_loc.type().is_cuda(), "sampling_loc must be a CUDA tensor");
+ AT_ASSERTM(attn_weight.type().is_cuda(), "attn_weight must be a CUDA tensor");
+
+ const int batch = value.size(0);
+ const int spatial_size = value.size(1);
+ const int num_heads = value.size(2);
+ const int channels = value.size(3);
+
+ const int num_levels = spatial_shapes.size(0);
+
+ const int num_query = sampling_loc.size(1);
+ const int num_point = sampling_loc.size(4);
+
+ const int im2col_step_ = std::min(batch, im2col_step);
+
+ AT_ASSERTM(batch % im2col_step_ == 0, "batch(%d) must divide im2col_step(%d)", batch, im2col_step_);
+
+ auto output = at::zeros({batch, num_query, num_heads, channels}, value.options());
+
+ const int batch_n = im2col_step_;
+ auto output_n = output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});
+ auto per_value_size = spatial_size * num_heads * channels;
+ auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;
+ auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;
+ for (int n = 0; n < batch/im2col_step_; ++n)
+ {
+ auto columns = output_n.select(0, n);
+ AT_DISPATCH_FLOATING_TYPES(value.type(), "ms_deform_attn_forward_cuda", ([&] {
+ ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(),
+ value.data<scalar_t>() + n * im2col_step_ * per_value_size,
+ spatial_shapes.data<int64_t>(),
+ level_start_index.data<int64_t>(),
+ sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
+ attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,
+ batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,
+ columns.data<scalar_t>());
+
+ }));
+ }
+
+ output = output.view({batch, num_query, num_heads*channels});
+
+ return output;
+}
+
+
+std::vector<at::Tensor> ms_deform_attn_cuda_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step)
+{
+
+ AT_ASSERTM(value.is_contiguous(), "value tensor has to be contiguous");
+ AT_ASSERTM(spatial_shapes.is_contiguous(), "spatial_shapes tensor has to be contiguous");
+ AT_ASSERTM(level_start_index.is_contiguous(), "level_start_index tensor has to be contiguous");
+ AT_ASSERTM(sampling_loc.is_contiguous(), "sampling_loc tensor has to be contiguous");
+ AT_ASSERTM(attn_weight.is_contiguous(), "attn_weight tensor has to be contiguous");
+ AT_ASSERTM(grad_output.is_contiguous(), "grad_output tensor has to be contiguous");
+
+ AT_ASSERTM(value.type().is_cuda(), "value must be a CUDA tensor");
+ AT_ASSERTM(spatial_shapes.type().is_cuda(), "spatial_shapes must be a CUDA tensor");
+ AT_ASSERTM(level_start_index.type().is_cuda(), "level_start_index must be a CUDA tensor");
+ AT_ASSERTM(sampling_loc.type().is_cuda(), "sampling_loc must be a CUDA tensor");
+ AT_ASSERTM(attn_weight.type().is_cuda(), "attn_weight must be a CUDA tensor");
+ AT_ASSERTM(grad_output.type().is_cuda(), "grad_output must be a CUDA tensor");
+
+ const int batch = value.size(0);
+ const int spatial_size = value.size(1);
+ const int num_heads = value.size(2);
+ const int channels = value.size(3);
+
+ const int num_levels = spatial_shapes.size(0);
+
+ const int num_query = sampling_loc.size(1);
+ const int num_point = sampling_loc.size(4);
+
+ const int im2col_step_ = std::min(batch, im2col_step);
+
+ AT_ASSERTM(batch % im2col_step_ == 0, "batch(%d) must divide im2col_step(%d)", batch, im2col_step_);
+
+ auto grad_value = at::zeros_like(value);
+ auto grad_sampling_loc = at::zeros_like(sampling_loc);
+ auto grad_attn_weight = at::zeros_like(attn_weight);
+
+ const int batch_n = im2col_step_;
+ auto per_value_size = spatial_size * num_heads * channels;
+ auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;
+ auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;
+ auto grad_output_n = grad_output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});
+
+ for (int n = 0; n < batch/im2col_step_; ++n)
+ {
+ auto grad_output_g = grad_output_n.select(0, n);
+ AT_DISPATCH_FLOATING_TYPES(value.type(), "ms_deform_attn_backward_cuda", ([&] {
+ ms_deformable_col2im_cuda(at::cuda::getCurrentCUDAStream(),
+ grad_output_g.data<scalar_t>(),
+ value.data<scalar_t>() + n * im2col_step_ * per_value_size,
+ spatial_shapes.data<int64_t>(),
+ level_start_index.data<int64_t>(),
+ sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
+ attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,
+ batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,
+ grad_value.data<scalar_t>() + n * im2col_step_ * per_value_size,
+ grad_sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
+ grad_attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size);
+
+ }));
+ }
+
+ return {
+ grad_value, grad_sampling_loc, grad_attn_weight
+ };
+}
\ No newline at end of file
diff --git a/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h b/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h
new file mode 100644
index 0000000000000000000000000000000000000000..4f0658e8668a11f0e7d71deff9adac71884f2e87
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h
@@ -0,0 +1,35 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor ms_deform_attn_cuda_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step);
+
+std::vector<at::Tensor> ms_deform_attn_cuda_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step);
+
diff --git a/modeling/pixel_decoder/ops/src/cuda/ms_deform_im2col_cuda.cuh b/modeling/pixel_decoder/ops/src/cuda/ms_deform_im2col_cuda.cuh
new file mode 100644
index 0000000000000000000000000000000000000000..c04e0d4ab97d25c1756fcd8d08dd1e5a6d280b7c
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/cuda/ms_deform_im2col_cuda.cuh
@@ -0,0 +1,1332 @@
+/*!
+**************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************
+* Modified from DCN (https://github.com/msracver/Deformable-ConvNets)
+* Copyright (c) 2018 Microsoft
+**************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include <cstdio>
+#include <algorithm>
+#include <cstring>
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+
+#include <THC/THCAtomics.cuh>
+
+#define CUDA_KERNEL_LOOP(i, n) \
+ for (int i = blockIdx.x * blockDim.x + threadIdx.x; \
+ i < (n); \
+ i += blockDim.x * gridDim.x)
+
+const int CUDA_NUM_THREADS = 1024;
+inline int GET_BLOCKS(const int N, const int num_threads)
+{
+ return (N + num_threads - 1) / num_threads;
+}
+
+
+template <typename scalar_t>
+__device__ scalar_t ms_deform_attn_im2col_bilinear(const scalar_t* &bottom_data,
+ const int &height, const int &width, const int &nheads, const int &channels,
+ const scalar_t &h, const scalar_t &w, const int &m, const int &c)
+{
+ const int h_low = floor(h);
+ const int w_low = floor(w);
+ const int h_high = h_low + 1;
+ const int w_high = w_low + 1;
+
+ const scalar_t lh = h - h_low;
+ const scalar_t lw = w - w_low;
+ const scalar_t hh = 1 - lh, hw = 1 - lw;
+
+ const int w_stride = nheads * channels;
+ const int h_stride = width * w_stride;
+ const int h_low_ptr_offset = h_low * h_stride;
+ const int h_high_ptr_offset = h_low_ptr_offset + h_stride;
+ const int w_low_ptr_offset = w_low * w_stride;
+ const int w_high_ptr_offset = w_low_ptr_offset + w_stride;
+ const int base_ptr = m * channels + c;
+
+ scalar_t v1 = 0;
+ if (h_low >= 0 && w_low >= 0)
+ {
+ const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;
+ v1 = bottom_data[ptr1];
+ }
+ scalar_t v2 = 0;
+ if (h_low >= 0 && w_high <= width - 1)
+ {
+ const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;
+ v2 = bottom_data[ptr2];
+ }
+ scalar_t v3 = 0;
+ if (h_high <= height - 1 && w_low >= 0)
+ {
+ const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;
+ v3 = bottom_data[ptr3];
+ }
+ scalar_t v4 = 0;
+ if (h_high <= height - 1 && w_high <= width - 1)
+ {
+ const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;
+ v4 = bottom_data[ptr4];
+ }
+
+ const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
+
+ const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
+ return val;
+}
+
+
+template <typename scalar_t>
+__device__ void ms_deform_attn_col2im_bilinear(const scalar_t* &bottom_data,
+ const int &height, const int &width, const int &nheads, const int &channels,
+ const scalar_t &h, const scalar_t &w, const int &m, const int &c,
+ const scalar_t &top_grad,
+ const scalar_t &attn_weight,
+ scalar_t* &grad_value,
+ scalar_t* grad_sampling_loc,
+ scalar_t* grad_attn_weight)
+{
+ const int h_low = floor(h);
+ const int w_low = floor(w);
+ const int h_high = h_low + 1;
+ const int w_high = w_low + 1;
+
+ const scalar_t lh = h - h_low;
+ const scalar_t lw = w - w_low;
+ const scalar_t hh = 1 - lh, hw = 1 - lw;
+
+ const int w_stride = nheads * channels;
+ const int h_stride = width * w_stride;
+ const int h_low_ptr_offset = h_low * h_stride;
+ const int h_high_ptr_offset = h_low_ptr_offset + h_stride;
+ const int w_low_ptr_offset = w_low * w_stride;
+ const int w_high_ptr_offset = w_low_ptr_offset + w_stride;
+ const int base_ptr = m * channels + c;
+
+ const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
+ const scalar_t top_grad_value = top_grad * attn_weight;
+ scalar_t grad_h_weight = 0, grad_w_weight = 0;
+
+ scalar_t v1 = 0;
+ if (h_low >= 0 && w_low >= 0)
+ {
+ const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;
+ v1 = bottom_data[ptr1];
+ grad_h_weight -= hw * v1;
+ grad_w_weight -= hh * v1;
+ atomicAdd(grad_value+ptr1, w1*top_grad_value);
+ }
+ scalar_t v2 = 0;
+ if (h_low >= 0 && w_high <= width - 1)
+ {
+ const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;
+ v2 = bottom_data[ptr2];
+ grad_h_weight -= lw * v2;
+ grad_w_weight += hh * v2;
+ atomicAdd(grad_value+ptr2, w2*top_grad_value);
+ }
+ scalar_t v3 = 0;
+ if (h_high <= height - 1 && w_low >= 0)
+ {
+ const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;
+ v3 = bottom_data[ptr3];
+ grad_h_weight += hw * v3;
+ grad_w_weight -= lh * v3;
+ atomicAdd(grad_value+ptr3, w3*top_grad_value);
+ }
+ scalar_t v4 = 0;
+ if (h_high <= height - 1 && w_high <= width - 1)
+ {
+ const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;
+ v4 = bottom_data[ptr4];
+ grad_h_weight += lw * v4;
+ grad_w_weight += lh * v4;
+ atomicAdd(grad_value+ptr4, w4*top_grad_value);
+ }
+
+ const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
+ *grad_attn_weight = top_grad * val;
+ *grad_sampling_loc = width * grad_w_weight * top_grad_value;
+ *(grad_sampling_loc + 1) = height * grad_h_weight * top_grad_value;
+}
+
+
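+// Same backward computation as above, but accumulates the sampling-location and
+// attention-weight gradients directly into global memory with atomicAdd.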
+template <typename scalar_t>
+__device__ void ms_deform_attn_col2im_bilinear_gm(const scalar_t* &bottom_data,
+ const int &height, const int &width, const int &nheads, const int &channels,
+ const scalar_t &h, const scalar_t &w, const int &m, const int &c,
+ const scalar_t &top_grad,
+ const scalar_t &attn_weight,
+ scalar_t* &grad_value,
+ scalar_t* grad_sampling_loc,
+ scalar_t* grad_attn_weight)
+{
+ const int h_low = floor(h);
+ const int w_low = floor(w);
+ const int h_high = h_low + 1;
+ const int w_high = w_low + 1;
+
+ const scalar_t lh = h - h_low;
+ const scalar_t lw = w - w_low;
+ const scalar_t hh = 1 - lh, hw = 1 - lw;
+
+ const int w_stride = nheads * channels;
+ const int h_stride = width * w_stride;
+ const int h_low_ptr_offset = h_low * h_stride;
+ const int h_high_ptr_offset = h_low_ptr_offset + h_stride;
+ const int w_low_ptr_offset = w_low * w_stride;
+ const int w_high_ptr_offset = w_low_ptr_offset + w_stride;
+ const int base_ptr = m * channels + c;
+
+ const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
+ const scalar_t top_grad_value = top_grad * attn_weight;
+ scalar_t grad_h_weight = 0, grad_w_weight = 0;
+
+ scalar_t v1 = 0;
+ if (h_low >= 0 && w_low >= 0)
+ {
+ const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;
+ v1 = bottom_data[ptr1];
+ grad_h_weight -= hw * v1;
+ grad_w_weight -= hh * v1;
+ atomicAdd(grad_value+ptr1, w1*top_grad_value);
+ }
+ scalar_t v2 = 0;
+ if (h_low >= 0 && w_high <= width - 1)
+ {
+ const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;
+ v2 = bottom_data[ptr2];
+ grad_h_weight -= lw * v2;
+ grad_w_weight += hh * v2;
+ atomicAdd(grad_value+ptr2, w2*top_grad_value);
+ }
+ scalar_t v3 = 0;
+ if (h_high <= height - 1 && w_low >= 0)
+ {
+ const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;
+ v3 = bottom_data[ptr3];
+ grad_h_weight += hw * v3;
+ grad_w_weight -= lh * v3;
+ atomicAdd(grad_value+ptr3, w3*top_grad_value);
+ }
+ scalar_t v4 = 0;
+ if (h_high <= height - 1 && w_high <= width - 1)
+ {
+ const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;
+ v4 = bottom_data[ptr4];
+ grad_h_weight += lw * v4;
+ grad_w_weight += lh * v4;
+ atomicAdd(grad_value+ptr4, w4*top_grad_value);
+ }
+
+ const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
+ atomicAdd(grad_attn_weight, top_grad * val);
+ atomicAdd(grad_sampling_loc, width * grad_w_weight * top_grad_value);
+ atomicAdd(grad_sampling_loc + 1, height * grad_h_weight * top_grad_value);
+}
+
+
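+// Forward kernel: one thread per output element (batch, query, head, channel);
+// iterates over all levels and sampling points and accumulates the
+// attention-weighted bilinear samples into data_col.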
+template <typename scalar_t>
+__global__ void ms_deformable_im2col_gpu_kernel(const int n,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *data_col)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ scalar_t *data_col_ptr = data_col + index;
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+ scalar_t col = 0;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const scalar_t *data_value_ptr = data_value + (data_value_ptr_init_offset + level_start_id * qid_stride);
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ col += ms_deform_attn_im2col_bilinear(data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col) * weight;
+ }
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ }
+ }
+ *data_col_ptr = col;
+ }
+}
+
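+// Backward kernel with statically sized shared memory (blockSize is a
+// compile-time constant); thread 0 serially reduces the per-thread
+// location/weight gradients of the block.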
+template <typename scalar_t, unsigned int blockSize>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2];
+ __shared__ scalar_t cache_grad_attn_weight[blockSize];
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+ if (tid == 0)
+ {
+ scalar_t _grad_w=cache_grad_sampling_loc[0], _grad_h=cache_grad_sampling_loc[1], _grad_a=cache_grad_attn_weight[0];
+ int sid=2;
+ for (unsigned int tid = 1; tid < blockSize; ++tid)
+ {
+ _grad_w += cache_grad_sampling_loc[sid];
+ _grad_h += cache_grad_sampling_loc[sid + 1];
+ _grad_a += cache_grad_attn_weight[tid];
+ sid += 2;
+ }
+
+
+ *grad_sampling_loc = _grad_w;
+ *(grad_sampling_loc + 1) = _grad_h;
+ *grad_attn_weight = _grad_a;
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
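+// As above, but the per-block reduction is a shared-memory tree reduction
+// instead of a serial loop on thread 0.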
+template <typename scalar_t, unsigned int blockSize>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2];
+ __shared__ scalar_t cache_grad_attn_weight[blockSize];
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+
+ for (unsigned int s=blockSize/2; s>0; s>>=1)
+ {
+ if (tid < s) {
+ const unsigned int xid1 = tid << 1;
+ const unsigned int xid2 = (tid + s) << 1;
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];
+ }
+ __syncthreads();
+ }
+
+ if (tid == 0)
+ {
+ *grad_sampling_loc = cache_grad_sampling_loc[0];
+ *(grad_sampling_loc + 1) = cache_grad_sampling_loc[1];
+ *grad_attn_weight = cache_grad_attn_weight[0];
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
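+// Backward kernel with dynamically sized shared memory; thread 0 serially
+// reduces the per-thread gradients (selected by the launcher for channel counts
+// below 64 that are not covered by the power-of-two cases).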
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v1(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ extern __shared__ int _s[];
+ scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;
+ scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+ if (tid == 0)
+ {
+ scalar_t _grad_w=cache_grad_sampling_loc[0], _grad_h=cache_grad_sampling_loc[1], _grad_a=cache_grad_attn_weight[0];
+ int sid=2;
+ for (unsigned int tid = 1; tid < blockDim.x; ++tid)
+ {
+ _grad_w += cache_grad_sampling_loc[sid];
+ _grad_h += cache_grad_sampling_loc[sid + 1];
+ _grad_a += cache_grad_attn_weight[tid];
+ sid += 2;
+ }
+
+
+ *grad_sampling_loc = _grad_w;
+ *(grad_sampling_loc + 1) = _grad_h;
+ *grad_attn_weight = _grad_a;
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
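+// Backward kernel with dynamically sized shared memory and a tree reduction
+// (selected for channel counts between 64 and 1024 that are not covered by the
+// power-of-two cases).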
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ extern __shared__ int _s[];
+ scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;
+ scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+
+ for (unsigned int s=blockDim.x/2, spre=blockDim.x; s>0; s>>=1, spre>>=1)
+ {
+ if (tid < s) {
+ const unsigned int xid1 = tid << 1;
+ const unsigned int xid2 = (tid + s) << 1;
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];
+ if (tid + (s << 1) < spre)
+ {
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + (s << 1)];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2 + (s << 1)];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1 + (s << 1)];
+ }
+ }
+ __syncthreads();
+ }
+
+ if (tid == 0)
+ {
+ *grad_sampling_loc = cache_grad_sampling_loc[0];
+ *(grad_sampling_loc + 1) = cache_grad_sampling_loc[1];
+ *grad_attn_weight = cache_grad_attn_weight[0];
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
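+// Variant of the tree-reduction kernel for channels > 1024 that are a multiple
+// of 1024: several blocks cover one query, so each block's reduced result is
+// combined with atomicAdd instead of a plain store.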
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ extern __shared__ int _s[];
+ scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;
+ scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+
+ for (unsigned int s=blockDim.x/2, spre=blockDim.x; s>0; s>>=1, spre>>=1)
+ {
+ if (tid < s) {
+ const unsigned int xid1 = tid << 1;
+ const unsigned int xid2 = (tid + s) << 1;
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];
+ if (tid + (s << 1) < spre)
+ {
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + (s << 1)];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2 + (s << 1)];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1 + (s << 1)];
+ }
+ }
+ __syncthreads();
+ }
+
+ if (tid == 0)
+ {
+ atomicAdd(grad_sampling_loc, cache_grad_sampling_loc[0]);
+ atomicAdd(grad_sampling_loc + 1, cache_grad_sampling_loc[1]);
+ atomicAdd(grad_attn_weight, cache_grad_attn_weight[0]);
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
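+// Fallback backward kernel that skips shared memory entirely and accumulates
+// every gradient directly in global memory via atomicAdd.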
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_gm(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear_gm(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ grad_sampling_loc, grad_attn_weight);
+ }
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
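+// Host-side launcher for the forward pass: one CUDA thread per output element,
+// CUDA_NUM_THREADS threads per block.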
+template <typename scalar_t>
+void ms_deformable_im2col_cuda(cudaStream_t stream,
+ const scalar_t* data_value,
+ const int64_t* data_spatial_shapes,
+ const int64_t* data_level_start_index,
+ const scalar_t* data_sampling_loc,
+ const scalar_t* data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t* data_col)
+{
+ const int num_kernels = batch_size * num_query * num_heads * channels;
+ const int num_actual_kernels = batch_size * num_query * num_heads * channels;
+ const int num_threads = CUDA_NUM_THREADS;
+  ms_deformable_im2col_gpu_kernel<scalar_t>
+      <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,
+          0, stream>>>(
+ num_kernels, data_value, data_spatial_shapes, data_level_start_index, data_sampling_loc, data_attn_weight,
+ batch_size, spatial_size, num_heads, channels, num_levels, num_query, num_point, data_col);
+
+ cudaError_t err = cudaGetLastError();
+ if (err != cudaSuccess)
+ {
+ printf("error in ms_deformable_im2col_cuda: %s\n", cudaGetErrorString(err));
+ }
+
+}
+
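+// Host-side launcher for the backward pass: picks a kernel variant based on the
+// channel count (compile-time block sizes for powers of two up to 1024, dynamic
+// shared memory otherwise, and the global-memory fallback when channels exceed
+// 1024 and are not a multiple of 1024).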
+template <typename scalar_t>
+void ms_deformable_col2im_cuda(cudaStream_t stream,
+ const scalar_t* grad_col,
+ const scalar_t* data_value,
+ const int64_t * data_spatial_shapes,
+ const int64_t * data_level_start_index,
+ const scalar_t * data_sampling_loc,
+ const scalar_t * data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t* grad_value,
+ scalar_t* grad_sampling_loc,
+ scalar_t* grad_attn_weight)
+{
+ const int num_threads = (channels > CUDA_NUM_THREADS)?CUDA_NUM_THREADS:channels;
+ const int num_kernels = batch_size * num_query * num_heads * channels;
+ const int num_actual_kernels = batch_size * num_query * num_heads * channels;
+ if (channels > 1024)
+ {
+ if ((channels & 1023) == 0)
+ {
+      ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks<scalar_t>
+          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,
+              num_threads*3*sizeof(scalar_t), stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ else
+ {
+      ms_deformable_col2im_gpu_kernel_gm<scalar_t>
+          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,
+              0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ }
+ else{
+ switch(channels)
+ {
+ case 1:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 1>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 2:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 2>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 4:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 4>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 8:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 8>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 16:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 16>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 32:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 32>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 64:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 64>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 128:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 128>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 256:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 256>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 512:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 512>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 1024:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 1024>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ default:
+ if (channels < 64)
+ {
+        ms_deformable_col2im_gpu_kernel_shm_reduce_v1<scalar_t>
+            <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,
+                num_threads*3*sizeof(scalar_t), stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ else
+ {
+        ms_deformable_col2im_gpu_kernel_shm_reduce_v2<scalar_t>
+            <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads,
+                num_threads*3*sizeof(scalar_t), stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ }
+ }
+ cudaError_t err = cudaGetLastError();
+ if (err != cudaSuccess)
+ {
+ printf("error in ms_deformable_col2im_cuda: %s\n", cudaGetErrorString(err));
+ }
+
+}
\ No newline at end of file
diff --git a/modeling/pixel_decoder/ops/src/ms_deform_attn.h b/modeling/pixel_decoder/ops/src/ms_deform_attn.h
new file mode 100644
index 0000000000000000000000000000000000000000..2f80a1b294c55b37d13bb3558ff7aeadba3b37de
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/ms_deform_attn.h
@@ -0,0 +1,67 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#pragma once
+
+#include "cpu/ms_deform_attn_cpu.h"
+
+#ifdef WITH_CUDA
+#include "cuda/ms_deform_attn_cuda.h"
+#endif
+
+
+at::Tensor
+ms_deform_attn_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step)
+{
+ if (value.type().is_cuda())
+ {
+#ifdef WITH_CUDA
+ return ms_deform_attn_cuda_forward(
+ value, spatial_shapes, level_start_index, sampling_loc, attn_weight, im2col_step);
+#else
+ AT_ERROR("Not compiled with GPU support");
+#endif
+ }
+ AT_ERROR("Not implemented on the CPU");
+}
+
+std::vector<at::Tensor>
+ms_deform_attn_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step)
+{
+ if (value.type().is_cuda())
+ {
+#ifdef WITH_CUDA
+ return ms_deform_attn_cuda_backward(
+ value, spatial_shapes, level_start_index, sampling_loc, attn_weight, grad_output, im2col_step);
+#else
+ AT_ERROR("Not compiled with GPU support");
+#endif
+ }
+ AT_ERROR("Not implemented on the CPU");
+}
+
diff --git a/modeling/pixel_decoder/ops/src/vision.cpp b/modeling/pixel_decoder/ops/src/vision.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..4a08821e0121a77556aa7a263ec8ebfa928b13b6
--- /dev/null
+++ b/modeling/pixel_decoder/ops/src/vision.cpp
@@ -0,0 +1,21 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include "ms_deform_attn.h"
+
+PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward");
+ m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward");
+}
diff --git a/modeling/pixel_decoder/ops/test.py b/modeling/pixel_decoder/ops/test.py
new file mode 100644
index 0000000000000000000000000000000000000000..6e1b545459f6fd3235767e721eb5a1090ae14bef
--- /dev/null
+++ b/modeling/pixel_decoder/ops/test.py
@@ -0,0 +1,92 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import time
+import torch
+import torch.nn as nn
+from torch.autograd import gradcheck
+
+from functions.ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch
+
+
+N, M, D = 1, 2, 2
+Lq, L, P = 2, 2, 2
+shapes = torch.as_tensor([(6, 4), (3, 2)], dtype=torch.long).cuda()
+level_start_index = torch.cat((shapes.new_zeros((1, )), shapes.prod(1).cumsum(0)[:-1]))
+S = sum([(H*W).item() for H, W in shapes])
+
+
+torch.manual_seed(3)
+
+
+@torch.no_grad()
+def check_forward_equal_with_pytorch_double():
+ value = torch.rand(N, S, M, D).cuda() * 0.01
+ sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
+ attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5
+ attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)
+ im2col_step = 2
+ output_pytorch = ms_deform_attn_core_pytorch(value.double(), shapes, sampling_locations.double(), attention_weights.double()).detach().cpu()
+ output_cuda = MSDeformAttnFunction.apply(value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step).detach().cpu()
+ fwdok = torch.allclose(output_cuda, output_pytorch)
+ max_abs_err = (output_cuda - output_pytorch).abs().max()
+ max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()
+
+ print(f'* {fwdok} check_forward_equal_with_pytorch_double: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')
+
+
+@torch.no_grad()
+def check_forward_equal_with_pytorch_float():
+ value = torch.rand(N, S, M, D).cuda() * 0.01
+ sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
+ attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5
+ attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)
+ im2col_step = 2
+ output_pytorch = ms_deform_attn_core_pytorch(value, shapes, sampling_locations, attention_weights).detach().cpu()
+ output_cuda = MSDeformAttnFunction.apply(value, shapes, level_start_index, sampling_locations, attention_weights, im2col_step).detach().cpu()
+ fwdok = torch.allclose(output_cuda, output_pytorch, rtol=1e-2, atol=1e-3)
+ max_abs_err = (output_cuda - output_pytorch).abs().max()
+ max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()
+
+ print(f'* {fwdok} check_forward_equal_with_pytorch_float: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')
+
+
+def check_gradient_numerical(channels=4, grad_value=True, grad_sampling_loc=True, grad_attn_weight=True):
+
+ value = torch.rand(N, S, M, channels).cuda() * 0.01
+ sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
+ attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5
+ attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)
+ im2col_step = 2
+ func = MSDeformAttnFunction.apply
+
+ value.requires_grad = grad_value
+ sampling_locations.requires_grad = grad_sampling_loc
+ attention_weights.requires_grad = grad_attn_weight
+
+ gradok = gradcheck(func, (value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step))
+
+ print(f'* {gradok} check_gradient_numerical(D={channels})')
+
+
+if __name__ == '__main__':
+ check_forward_equal_with_pytorch_double()
+ check_forward_equal_with_pytorch_float()
+
+ for channels in [30, 32, 64, 71, 1025, 2048, 3096]:
+ check_gradient_numerical(channels, True, True, True)
+
+
+
diff --git a/modeling/transformer_decoder/__init__.py b/modeling/transformer_decoder/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..ddcf38e78f3bbb2380b0a246000bcb5e5b385619
--- /dev/null
+++ b/modeling/transformer_decoder/__init__.py
@@ -0,0 +1,3 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+from .maskformer_transformer_decoder import StandardTransformerDecoder
+from .mask2former_transformer_decoder import MultiScaleMaskedTransformerDecoder
diff --git a/modeling/transformer_decoder/__pycache__/__init__.cpython-37.pyc b/modeling/transformer_decoder/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..62217aaf3688c74d134dd505a51053fdcb3967c2
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/__init__.cpython-38.pyc b/modeling/transformer_decoder/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..a872d6d9cb7088f6bf1557ee15f3a51ed0cd1574
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/__init__.cpython-38.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/mask2former_transformer_decoder.cpython-37.pyc b/modeling/transformer_decoder/__pycache__/mask2former_transformer_decoder.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..86e909e86a5825d426c64bce6fa111cd69c2675f
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/mask2former_transformer_decoder.cpython-37.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/mask2former_transformer_decoder.cpython-38.pyc b/modeling/transformer_decoder/__pycache__/mask2former_transformer_decoder.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..53a267b116680daa57ac3d3b23c3e169bd7b1b20
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/mask2former_transformer_decoder.cpython-38.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/maskformer_transformer_decoder.cpython-37.pyc b/modeling/transformer_decoder/__pycache__/maskformer_transformer_decoder.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..4884e691afbb800795aced7c7fb6ab887eb88319
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/maskformer_transformer_decoder.cpython-37.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/maskformer_transformer_decoder.cpython-38.pyc b/modeling/transformer_decoder/__pycache__/maskformer_transformer_decoder.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..aa3de4f616bae23fc52bc4f6d3d6b1fc7389fddb
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/maskformer_transformer_decoder.cpython-38.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/position_encoding.cpython-37.pyc b/modeling/transformer_decoder/__pycache__/position_encoding.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d2e5ab22f22869115db2b803e1226cf03d0b97d3
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/position_encoding.cpython-37.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/position_encoding.cpython-38.pyc b/modeling/transformer_decoder/__pycache__/position_encoding.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..d09ae06bef7bd3b2a851556511adb640959540e9
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/position_encoding.cpython-38.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/transformer.cpython-37.pyc b/modeling/transformer_decoder/__pycache__/transformer.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..de2c8a59a9fae243beeebb7e710eed9fe53a88bb
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/transformer.cpython-37.pyc differ
diff --git a/modeling/transformer_decoder/__pycache__/transformer.cpython-38.pyc b/modeling/transformer_decoder/__pycache__/transformer.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..0df144d12210d7db9597c84b9df9bb85f00e8097
Binary files /dev/null and b/modeling/transformer_decoder/__pycache__/transformer.cpython-38.pyc differ
diff --git a/modeling/transformer_decoder/mask2former_transformer_decoder.py b/modeling/transformer_decoder/mask2former_transformer_decoder.py
new file mode 100644
index 0000000000000000000000000000000000000000..f7cf30ff98dea67aa734b0b242826c450ccf7fda
--- /dev/null
+++ b/modeling/transformer_decoder/mask2former_transformer_decoder.py
@@ -0,0 +1,387 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py
+import fvcore.nn.weight_init as weight_init
+from typing import Optional
+import torch
+from torch import nn, Tensor
+from torch.nn import functional as F
+
+from .position_encoding import PositionEmbeddingSine
+
+class SelfAttentionLayer(nn.Module):
+
+ def __init__(self, d_model, nhead, dropout=0.0,
+ activation="relu", normalize_before=False):
+ super().__init__()
+ self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+
+ self.norm = nn.LayerNorm(d_model)
+ self.dropout = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(self, tgt,
+ tgt_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ q = k = self.with_pos_embed(tgt, query_pos)
+ tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,
+ key_padding_mask=tgt_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+ tgt = self.norm(tgt)
+
+ return tgt
+
+ def forward_pre(self, tgt,
+ tgt_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ tgt2 = self.norm(tgt)
+ q = k = self.with_pos_embed(tgt2, query_pos)
+ tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
+ key_padding_mask=tgt_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+
+ return tgt
+
+ def forward(self, tgt,
+ tgt_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ if self.normalize_before:
+ return self.forward_pre(tgt, tgt_mask,
+ tgt_key_padding_mask, query_pos)
+ return self.forward_post(tgt, tgt_mask,
+ tgt_key_padding_mask, query_pos)
+
+
+class CrossAttentionLayer(nn.Module):
+
+ def __init__(self, d_model, nhead, dropout=0.0,
+ activation="relu", normalize_before=False):
+ super().__init__()
+ self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+
+ self.norm = nn.LayerNorm(d_model)
+ self.dropout = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(self, tgt, memory,
+ memory_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory, attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+ tgt = self.norm(tgt)
+
+ return tgt
+
+ def forward_pre(self, tgt, memory,
+ memory_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ tgt2 = self.norm(tgt)
+ tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory, attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+
+ return tgt
+
+ def forward(self, tgt, memory,
+ memory_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ if self.normalize_before:
+ return self.forward_pre(tgt, memory, memory_mask,
+ memory_key_padding_mask, pos, query_pos)
+ return self.forward_post(tgt, memory, memory_mask,
+ memory_key_padding_mask, pos, query_pos)
+
+
+class FFNLayer(nn.Module):
+
+ def __init__(self, d_model, dim_feedforward=2048, dropout=0.0,
+ activation="relu", normalize_before=False):
+ super().__init__()
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm = nn.LayerNorm(d_model)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(self, tgt):
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
+ tgt = tgt + self.dropout(tgt2)
+ tgt = self.norm(tgt)
+ return tgt
+
+ def forward_pre(self, tgt):
+ tgt2 = self.norm(tgt)
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
+ tgt = tgt + self.dropout(tgt2)
+ return tgt
+
+ def forward(self, tgt):
+ if self.normalize_before:
+ return self.forward_pre(tgt)
+ return self.forward_post(tgt)
+
+
+def _get_activation_fn(activation):
+ """Return an activation function given a string"""
+ if activation == "relu":
+ return F.relu
+ if activation == "gelu":
+ return F.gelu
+ if activation == "glu":
+ return F.glu
+ raise RuntimeError(F"activation should be relu/gelu, not {activation}.")
+
+
+class MLP(nn.Module):
+ """ Very simple multi-layer perceptron (also called FFN)"""
+
+ def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
+ super().__init__()
+ self.num_layers = num_layers
+ h = [hidden_dim] * (num_layers - 1)
+ self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
+
+ def forward(self, x):
+ for i, layer in enumerate(self.layers):
+ x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
+ return x
+
+
+class MultiScaleMaskedTransformerDecoder(nn.Module):
+ def __init__(
+ self,
+ in_channels,
+ num_classes,
+ mask_classification=True,
+ hidden_dim=256,
+ num_queries=100,
+ nheads=8,
+ dim_feedforward=2048,
+ dec_layers=10,
+ pre_norm=False,
+ mask_dim=256,
+ enforce_input_project=False
+ ):
+ super().__init__()
+
+ assert mask_classification, "Only support mask classification model"
+ self.mask_classification = mask_classification
+
+ # positional encoding
+ N_steps = hidden_dim // 2
+ self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
+
+ # define Transformer decoder here
+ self.num_heads = nheads
+ self.num_layers = dec_layers
+ self.transformer_self_attention_layers = nn.ModuleList()
+ self.transformer_cross_attention_layers = nn.ModuleList()
+ self.transformer_ffn_layers = nn.ModuleList()
+
+ for _ in range(self.num_layers):
+ self.transformer_self_attention_layers.append(
+ SelfAttentionLayer(
+ d_model=hidden_dim,
+ nhead=nheads,
+ dropout=0.0,
+ normalize_before=pre_norm,
+ )
+ )
+
+ self.transformer_cross_attention_layers.append(
+ CrossAttentionLayer(
+ d_model=hidden_dim,
+ nhead=nheads,
+ dropout=0.0,
+ normalize_before=pre_norm,
+ )
+ )
+
+ self.transformer_ffn_layers.append(
+ FFNLayer(
+ d_model=hidden_dim,
+ dim_feedforward=dim_feedforward,
+ dropout=0.0,
+ normalize_before=pre_norm,
+ )
+ )
+
+ self.decoder_norm = nn.LayerNorm(hidden_dim)
+
+ self.num_queries = num_queries
+ # learnable query features
+ self.query_feat = nn.Embedding(num_queries, hidden_dim)
+ # learnable query p.e.
+ self.query_embed = nn.Embedding(num_queries, hidden_dim)
+
+ # level embedding (we always use 3 scales)
+ self.num_feature_levels = 3
+ self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim)
+ self.input_proj = nn.ModuleList()
+ for _ in range(self.num_feature_levels):
+ if in_channels != hidden_dim or enforce_input_project:
+ self.input_proj.append(nn.Conv2d(in_channels, hidden_dim, kernel_size=1))
+ weight_init.c2_xavier_fill(self.input_proj[-1])
+ else:
+ self.input_proj.append(nn.Sequential())
+
+ # output FFNs
+ if self.mask_classification:
+ self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
+ self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
+
+ def forward(self, x, mask_features, mask = None):
+ #print(mask_features.shape, "!!")
+ # x is a list of multi-scale feature
+ assert len(x) == self.num_feature_levels
+ src = []
+ pos = []
+ size_list = []
+
+ # disable mask, it does not affect performance
+ del mask
+
+ for i in range(self.num_feature_levels):
+ size_list.append(x[i].shape[-2:])
+ pos.append(self.pe_layer(x[i], None).flatten(2))
+ src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None])
+
+ # flatten NxCxHxW to HWxNxC
+ pos[-1] = pos[-1].permute(2, 0, 1)
+ src[-1] = src[-1].permute(2, 0, 1)
+
+ _, bs, _ = src[0].shape
+
+ # QxNxC
+ query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1)
+ output = self.query_feat.weight.unsqueeze(1).repeat(1, bs, 1)
+
+ predictions_class = []
+ predictions_mask = []
+
+ # prediction heads on learnable query features
+ outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0])
+ predictions_class.append(outputs_class)
+ predictions_mask.append(outputs_mask)
+
+ for i in range(self.num_layers):
+ level_index = i % self.num_feature_levels
+ attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False
+ # attention: cross-attention first
+ output = self.transformer_cross_attention_layers[i](
+ output, src[level_index],
+ memory_mask=attn_mask,
+ memory_key_padding_mask=None, # here we do not apply masking on padded region
+ pos=pos[level_index], query_pos=query_embed
+ )
+
+ output = self.transformer_self_attention_layers[i](
+ output, tgt_mask=None,
+ tgt_key_padding_mask=None,
+ query_pos=query_embed
+ )
+
+ # FFN
+ output = self.transformer_ffn_layers[i](output)
+
+ outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels])
+ predictions_class.append(outputs_class)
+ predictions_mask.append(outputs_mask)
+
+ assert len(predictions_class) == self.num_layers + 1
+
+ out = {
+ 'pred_logits': predictions_class[-1],
+ 'pred_masks': predictions_mask[-1],
+ 'aux_outputs': self._set_aux_loss(
+ predictions_class if self.mask_classification else None, predictions_mask
+ )
+ }
+ return out
+
+ def forward_prediction_heads(self, output, mask_features, attn_mask_target_size):
+ decoder_output = self.decoder_norm(output)
+ decoder_output = decoder_output.transpose(0, 1)
+ outputs_class = self.class_embed(decoder_output)
+ mask_embed = self.mask_embed(decoder_output)
+ outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
+
+ # NOTE: prediction is of higher-resolution
+ # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW]
+ attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False)
+ # must use bool type
+ # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged.
+ attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool()
+ attn_mask = attn_mask.detach()
+
+ return outputs_class, outputs_mask, attn_mask
+
+ @torch.jit.unused
+ def _set_aux_loss(self, outputs_class, outputs_seg_masks):
+ # this is a workaround to make torchscript happy, as torchscript
+ # doesn't support dictionary with non-homogeneous values, such
+ # as a dict having both a Tensor and a list.
+ if self.mask_classification:
+ return [
+ {"pred_logits": a, "pred_masks": b}
+ for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
+ ]
+ else:
+ return [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
diff --git a/modeling/transformer_decoder/maskformer_transformer_decoder.py b/modeling/transformer_decoder/maskformer_transformer_decoder.py
new file mode 100644
index 0000000000000000000000000000000000000000..458f53321f42471005bd5f466ed992c245ffff4b
--- /dev/null
+++ b/modeling/transformer_decoder/maskformer_transformer_decoder.py
@@ -0,0 +1,123 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py
+import fvcore.nn.weight_init as weight_init
+import torch
+from torch import nn
+from torch.nn import functional as F
+
+from .position_encoding import PositionEmbeddingSine
+from .transformer import Transformer
+
+
+class StandardTransformerDecoder(nn.Module):
+ def __init__(
+ self,
+ in_channels,
+ num_classes,
+ mask_classification=True,
+ hidden_dim=256,
+ num_queries=100,
+ nheads=8,
+ dropout=0.0,
+ dim_feedforward=2048,
+ enc_layers=0,
+ dec_layers=10,
+ pre_norm=False,
+ deep_supervision=True,
+ mask_dim=256,
+ enforce_input_project=False
+ ):
+ super().__init__()
+ self.mask_classification = mask_classification
+ # positional encoding
+ N_steps = hidden_dim // 2
+ self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
+
+ transformer = Transformer(
+ d_model=hidden_dim,
+ dropout=dropout,
+ nhead=nheads,
+ dim_feedforward=dim_feedforward,
+ num_encoder_layers=enc_layers,
+ num_decoder_layers=dec_layers,
+ normalize_before=pre_norm,
+ return_intermediate_dec=deep_supervision,
+ )
+
+ self.num_queries = num_queries
+ self.transformer = transformer
+ hidden_dim = transformer.d_model
+
+ self.query_embed = nn.Embedding(num_queries, hidden_dim)
+
+ if in_channels != hidden_dim or enforce_input_project:
+ self.input_proj = nn.Conv2d(in_channels, hidden_dim, kernel_size=1)
+ weight_init.c2_xavier_fill(self.input_proj)
+ else:
+ self.input_proj = nn.Sequential()
+ self.aux_loss = deep_supervision
+
+ # output FFNs
+ if self.mask_classification:
+ self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
+ self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
+
+ def forward(self, x, mask_features, mask=None):
+ if mask is not None:
+ mask = F.interpolate(mask[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
+ pos = self.pe_layer(x, mask)
+
+ src = x
+ hs, memory = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos)
+
+ if self.mask_classification:
+ outputs_class = self.class_embed(hs)
+ out = {"pred_logits": outputs_class[-1]}
+ else:
+ out = {}
+
+ if self.aux_loss:
+ # [l, bs, queries, embed]
+ mask_embed = self.mask_embed(hs)
+ outputs_seg_masks = torch.einsum("lbqc,bchw->lbqhw", mask_embed, mask_features)
+ out["pred_masks"] = outputs_seg_masks[-1]
+ out["aux_outputs"] = self._set_aux_loss(
+ outputs_class if self.mask_classification else None, outputs_seg_masks
+ )
+ else:
+ # FIXME h_boxes takes the last one computed, keep this in mind
+ # [bs, queries, embed]
+ mask_embed = self.mask_embed(hs[-1])
+ outputs_seg_masks = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
+ out["pred_masks"] = outputs_seg_masks
+ return out
+
+ @torch.jit.unused
+ def _set_aux_loss(self, outputs_class, outputs_seg_masks):
+ # this is a workaround to make torchscript happy, as torchscript
+ # doesn't support dictionary with non-homogeneous values, such
+ # as a dict having both a Tensor and a list.
+ if self.mask_classification:
+ return [
+ {"pred_logits": a, "pred_masks": b}
+ for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
+ ]
+ else:
+ return [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
+
+
+class MLP(nn.Module):
+ """Very simple multi-layer perceptron (also called FFN)"""
+
+ def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
+ super().__init__()
+ self.num_layers = num_layers
+ h = [hidden_dim] * (num_layers - 1)
+ self.layers = nn.ModuleList(
+ nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
+ )
+
+ def forward(self, x):
+ for i, layer in enumerate(self.layers):
+ x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
+ return x
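+
+ # Sketch of how the deep-supervision outputs are typically consumed (the `criterion` below is a
+ # hypothetical set-prediction loss, not defined in this file): the last-layer prediction and every
+ # entry in "aux_outputs" produced by _set_aux_loss are scored with the same loss.
+ #     losses = criterion(out, targets)
+ #     for aux_out in out["aux_outputs"]:
+ #         aux_losses = criterion(aux_out, targets)
+ #         losses = {k: losses[k] + aux_losses[k] for k in losses}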
diff --git a/modeling/transformer_decoder/position_encoding.py b/modeling/transformer_decoder/position_encoding.py
new file mode 100644
index 0000000000000000000000000000000000000000..c27d3c46b260252c9d0d94f15b900e7615032c0f
--- /dev/null
+++ b/modeling/transformer_decoder/position_encoding.py
@@ -0,0 +1,64 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/position_encoding.py
+"""
+Various positional encodings for the transformer.
+"""
+import math
+
+import torch
+from torch import nn
+
+
+class PositionEmbeddingSine(nn.Module):
+ """
+ This is a more standard version of the position embedding, very similar to the one
+ used by the Attention is all you need paper, generalized to work on images.
+ """
+
+ def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
+ super().__init__()
+ self.num_pos_feats = num_pos_feats
+ self.temperature = temperature
+ self.normalize = normalize
+ if scale is not None and normalize is False:
+ raise ValueError("normalize should be True if scale is passed")
+ if scale is None:
+ scale = 2 * math.pi
+ self.scale = scale
+
+ def forward(self, x, mask=None):
+ if mask is None:
+ mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)
+ not_mask = ~mask
+ y_embed = not_mask.cumsum(1, dtype=torch.float32)
+ x_embed = not_mask.cumsum(2, dtype=torch.float32)
+ if self.normalize:
+ eps = 1e-6
+ y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
+ x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
+
+ dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
+ dim_t = self.temperature ** (2 * (torch.div(dim_t, 2, rounding_mode='floor')) / self.num_pos_feats)
+
+ pos_x = x_embed[:, :, :, None] / dim_t
+ pos_y = y_embed[:, :, :, None] / dim_t
+ pos_x = torch.stack(
+ (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
+ ).flatten(3)
+ pos_y = torch.stack(
+ (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
+ ).flatten(3)
+ pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
+ return pos
+
+ def __repr__(self, _repr_indent=4):
+ head = "Positional encoding " + self.__class__.__name__
+ body = [
+ "num_pos_feats: {}".format(self.num_pos_feats),
+ "temperature: {}".format(self.temperature),
+ "normalize: {}".format(self.normalize),
+ "scale: {}".format(self.scale),
+ ]
+ # _repr_indent = 4
+ lines = [head] + [" " * _repr_indent + line for line in body]
+ return "\n".join(lines)
\ No newline at end of file
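+
+ # Shape sketch with illustrative sizes: for num_pos_feats = hidden_dim // 2 the embedding matches
+ # the transformer width, i.e. the output is (B, 2 * num_pos_feats, H, W) regardless of the input
+ # channel count.
+ if __name__ == "__main__":
+     pe = PositionEmbeddingSine(num_pos_feats=128, normalize=True)
+     pos = pe(torch.randn(2, 256, 32, 32))
+     print(pos.shape)   # torch.Size([2, 256, 32, 32])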
diff --git a/modeling/transformer_decoder/transformer.py b/modeling/transformer_decoder/transformer.py
new file mode 100644
index 0000000000000000000000000000000000000000..ea8caa0108f5e136a9739320ab69a3e1b6f40298
--- /dev/null
+++ b/modeling/transformer_decoder/transformer.py
@@ -0,0 +1,369 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/transformer.py
+"""
+Transformer class.
+
+Copy-paste from torch.nn.Transformer with modifications:
+ * positional encodings are passed in MHattention
+ * extra LN at the end of encoder is removed
+ * decoder returns a stack of activations from all decoding layers
+"""
+import copy
+from typing import List, Optional
+
+import torch
+import torch.nn.functional as F
+from torch import Tensor, nn
+
+
+class Transformer(nn.Module):
+ def __init__(
+ self,
+ d_model=512,
+ nhead=8,
+ num_encoder_layers=6,
+ num_decoder_layers=6,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ normalize_before=False,
+ return_intermediate_dec=False,
+ ):
+ super().__init__()
+
+ encoder_layer = TransformerEncoderLayer(
+ d_model, nhead, dim_feedforward, dropout, activation, normalize_before
+ )
+ encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
+ self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)
+
+ decoder_layer = TransformerDecoderLayer(
+ d_model, nhead, dim_feedforward, dropout, activation, normalize_before
+ )
+ decoder_norm = nn.LayerNorm(d_model)
+ self.decoder = TransformerDecoder(
+ decoder_layer,
+ num_decoder_layers,
+ decoder_norm,
+ return_intermediate=return_intermediate_dec,
+ )
+
+ self._reset_parameters()
+
+ self.d_model = d_model
+ self.nhead = nhead
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def forward(self, src, mask, query_embed, pos_embed):
+ # flatten NxCxHxW to HWxNxC
+ bs, c, h, w = src.shape
+ src = src.flatten(2).permute(2, 0, 1)
+ pos_embed = pos_embed.flatten(2).permute(2, 0, 1)
+ query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1)
+ if mask is not None:
+ mask = mask.flatten(1)
+
+ tgt = torch.zeros_like(query_embed)
+ memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed)
+ hs = self.decoder(
+ tgt, memory, memory_key_padding_mask=mask, pos=pos_embed, query_pos=query_embed
+ )
+ return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w)
+
+
+class TransformerEncoder(nn.Module):
+ def __init__(self, encoder_layer, num_layers, norm=None):
+ super().__init__()
+ self.layers = _get_clones(encoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.norm = norm
+
+ def forward(
+ self,
+ src,
+ mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ output = src
+
+ for layer in self.layers:
+ output = layer(
+ output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos
+ )
+
+ if self.norm is not None:
+ output = self.norm(output)
+
+ return output
+
+
+class TransformerDecoder(nn.Module):
+ def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
+ super().__init__()
+ self.layers = _get_clones(decoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.norm = norm
+ self.return_intermediate = return_intermediate
+
+ def forward(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ output = tgt
+
+ intermediate = []
+
+ for layer in self.layers:
+ output = layer(
+ output,
+ memory,
+ tgt_mask=tgt_mask,
+ memory_mask=memory_mask,
+ tgt_key_padding_mask=tgt_key_padding_mask,
+ memory_key_padding_mask=memory_key_padding_mask,
+ pos=pos,
+ query_pos=query_pos,
+ )
+ if self.return_intermediate:
+ intermediate.append(self.norm(output))
+
+ if self.norm is not None:
+ output = self.norm(output)
+ if self.return_intermediate:
+ intermediate.pop()
+ intermediate.append(output)
+
+ if self.return_intermediate:
+ return torch.stack(intermediate)
+
+ return output.unsqueeze(0)
+
+
+class TransformerEncoderLayer(nn.Module):
+ def __init__(
+ self,
+ d_model,
+ nhead,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ normalize_before=False,
+ ):
+ super().__init__()
+ self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout)
+ self.dropout2 = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(
+ self,
+ src,
+ src_mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ q = k = self.with_pos_embed(src, pos)
+ src2 = self.self_attn(
+ q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask
+ )[0]
+ src = src + self.dropout1(src2)
+ src = self.norm1(src)
+ src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
+ src = src + self.dropout2(src2)
+ src = self.norm2(src)
+ return src
+
+ def forward_pre(
+ self,
+ src,
+ src_mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ src2 = self.norm1(src)
+ q = k = self.with_pos_embed(src2, pos)
+ src2 = self.self_attn(
+ q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask
+ )[0]
+ src = src + self.dropout1(src2)
+ src2 = self.norm2(src)
+ src2 = self.linear2(self.dropout(self.activation(self.linear1(src2))))
+ src = src + self.dropout2(src2)
+ return src
+
+ def forward(
+ self,
+ src,
+ src_mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ if self.normalize_before:
+ return self.forward_pre(src, src_mask, src_key_padding_mask, pos)
+ return self.forward_post(src, src_mask, src_key_padding_mask, pos)
+
+
+class TransformerDecoderLayer(nn.Module):
+ def __init__(
+ self,
+ d_model,
+ nhead,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ normalize_before=False,
+ ):
+ super().__init__()
+ self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+ self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.norm3 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout)
+ self.dropout2 = nn.Dropout(dropout)
+ self.dropout3 = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ q = k = self.with_pos_embed(tgt, query_pos)
+ tgt2 = self.self_attn(
+ q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask
+ )[0]
+ tgt = tgt + self.dropout1(tgt2)
+ tgt = self.norm1(tgt)
+ tgt2 = self.multihead_attn(
+ query=self.with_pos_embed(tgt, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory,
+ attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask,
+ )[0]
+ tgt = tgt + self.dropout2(tgt2)
+ tgt = self.norm2(tgt)
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
+ tgt = tgt + self.dropout3(tgt2)
+ tgt = self.norm3(tgt)
+ return tgt
+
+ def forward_pre(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ tgt2 = self.norm1(tgt)
+ q = k = self.with_pos_embed(tgt2, query_pos)
+ tgt2 = self.self_attn(
+ q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask
+ )[0]
+ tgt = tgt + self.dropout1(tgt2)
+ tgt2 = self.norm2(tgt)
+ tgt2 = self.multihead_attn(
+ query=self.with_pos_embed(tgt2, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory,
+ attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask,
+ )[0]
+ tgt = tgt + self.dropout2(tgt2)
+ tgt2 = self.norm3(tgt)
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
+ tgt = tgt + self.dropout3(tgt2)
+ return tgt
+
+ def forward(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ if self.normalize_before:
+ return self.forward_pre(
+ tgt,
+ memory,
+ tgt_mask,
+ memory_mask,
+ tgt_key_padding_mask,
+ memory_key_padding_mask,
+ pos,
+ query_pos,
+ )
+ return self.forward_post(
+ tgt,
+ memory,
+ tgt_mask,
+ memory_mask,
+ tgt_key_padding_mask,
+ memory_key_padding_mask,
+ pos,
+ query_pos,
+ )
+
+
+def _get_clones(module, N):
+ return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
+
+
+def _get_activation_fn(activation):
+ """Return an activation function given a string"""
+ if activation == "relu":
+ return F.relu
+ if activation == "gelu":
+ return F.gelu
+ if activation == "glu":
+ return F.glu
+ raise RuntimeError(f"activation should be relu/gelu, not {activation}.")
diff --git a/modeling_cuda/MaskFormerModel.py b/modeling_cuda/MaskFormerModel.py
new file mode 100644
index 0000000000000000000000000000000000000000..b6396b37866c4ac245cc206b993aeb57b1b20871
--- /dev/null
+++ b/modeling_cuda/MaskFormerModel.py
@@ -0,0 +1,103 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : MaskFormerModel.py
+@Time : 2022/09/30 20:50:53
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : Segmentation network based on deformable transformer attention (DeformTransAtten)
+'''
+
+# here put the import lib
+from torch import nn
+from addict import Dict
+
+from .backbone.resnet import ResNet, resnet_spec
+from .pixel_decoder.msdeformattn import MSDeformAttnPixelDecoder
+from .transformer_decoder.mask2former_transformer_decoder import MultiScaleMaskedTransformerDecoder
+
+
+class MaskFormerHead(nn.Module):
+ def __init__(self, cfg, input_shape):
+ super().__init__()
+ self.pixel_decoder = self.pixel_decoder_init(cfg, input_shape)
+ self.predictor = self.predictor_init(cfg)
+
+ def pixel_decoder_init(self, cfg, input_shape):
+ common_stride = cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE
+ transformer_dropout = cfg.MODEL.MASK_FORMER.DROPOUT
+ transformer_nheads = cfg.MODEL.MASK_FORMER.NHEADS
+ transformer_dim_feedforward = 1024
+ transformer_enc_layers = cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS
+ conv_dim = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
+ mask_dim = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
+ transformer_in_features = cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES # ["res3", "res4", "res5"]
+
+ pixel_decoder = MSDeformAttnPixelDecoder(input_shape,
+ transformer_dropout,
+ transformer_nheads,
+ transformer_dim_feedforward,
+ transformer_enc_layers,
+ conv_dim,
+ mask_dim,
+ transformer_in_features,
+ common_stride)
+ return pixel_decoder
+
+ def predictor_init(self, cfg):
+ in_channels = cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM
+ num_classes = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES
+ hidden_dim = cfg.MODEL.MASK_FORMER.HIDDEN_DIM
+ num_queries = cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES
+ nheads = cfg.MODEL.MASK_FORMER.NHEADS
+ dim_feedforward = cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD
+ dec_layers = cfg.MODEL.MASK_FORMER.DEC_LAYERS - 1
+ pre_norm = cfg.MODEL.MASK_FORMER.PRE_NORM
+ mask_dim = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
+ enforce_input_project = False
+ mask_classification = True
+ predictor = MultiScaleMaskedTransformerDecoder(in_channels,
+ num_classes,
+ mask_classification,
+ hidden_dim,
+ num_queries,
+ nheads,
+ dim_feedforward,
+ dec_layers,
+ pre_norm,
+ mask_dim,
+ enforce_input_project)
+ return predictor
+
+ def forward(self, features, mask=None):
+ mask_features, transformer_encoder_features, multi_scale_features = self.pixel_decoder.forward_features(features)
+ predictions = self.predictor(multi_scale_features, mask_features, mask)
+ return predictions, mask_features
+
+class MaskFormerModel(nn.Module):
+ def __init__(self, cfg):
+ super().__init__()
+ self.backbone = self.build_backbone(cfg)
+ self.sem_seg_head = MaskFormerHead(cfg, self.backbone_feature_shape)
+
+ def build_backbone(self, cfg):
+ model_type = cfg.MODEL.BACKBONE.TYPE
+ assert model_type in ('resnet18', 'resnet34', 'resnet50'), f'Unsupported backbone type: {model_type}'
+
+ channels = [64, 128, 256, 512]
+ if int(model_type[6:]) > 34:
+ channels = [item * 4 for item in channels]
+
+ backbone = ResNet(resnet_spec[model_type][0], resnet_spec[model_type][1])
+ # backbone.init_weights()
+ self.backbone_feature_shape = dict()
+ for i, channel in enumerate(channels):
+ self.backbone_feature_shape[f'res{i+2}'] = Dict({'channel': channel, 'stride': 2**(i+2)})
+ return backbone
+
+ def forward(self, inputs):
+ features = self.backbone(inputs)
+ outputs = self.sem_seg_head(features)
+ return outputs
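+
+ # Sketch of the config keys this module reads. The values below are illustrative Mask2Former-style
+ # defaults, not taken from a config file in this repository, and building the head additionally
+ # requires the compiled MSDeformAttn CUDA ops used by the pixel decoder.
+ if __name__ == "__main__":
+     cfg = Dict()
+     cfg.MODEL.BACKBONE.TYPE = 'resnet50'
+     cfg.MODEL.SEM_SEG_HEAD.COMMON_STRIDE = 4
+     cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS = 6
+     cfg.MODEL.SEM_SEG_HEAD.CONVS_DIM = 256
+     cfg.MODEL.SEM_SEG_HEAD.MASK_DIM = 256
+     cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 1
+     cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES = ['res3', 'res4', 'res5']
+     cfg.MODEL.MASK_FORMER.DROPOUT = 0.0
+     cfg.MODEL.MASK_FORMER.NHEADS = 8
+     cfg.MODEL.MASK_FORMER.HIDDEN_DIM = 256
+     cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES = 100
+     cfg.MODEL.MASK_FORMER.DIM_FEEDFORWARD = 2048
+     cfg.MODEL.MASK_FORMER.DEC_LAYERS = 10
+     cfg.MODEL.MASK_FORMER.PRE_NORM = False
+     model = MaskFormerModel(cfg)
+     print(sum(p.numel() for p in model.parameters()) / 1e6, 'M parameters')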
diff --git a/modeling_cuda/__init__.py b/modeling_cuda/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..b1bbc6c11478da359b31f5f21ecfaa4e82d7ff86
--- /dev/null
+++ b/modeling_cuda/__init__.py
@@ -0,0 +1,2 @@
+from .backbone.resnet import ResNet
+from .MaskFormerModel import MaskFormerModel
diff --git a/modeling_cuda/__pycache__/MaskFormerModel.cpython-37.pyc b/modeling_cuda/__pycache__/MaskFormerModel.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..cb8923574c868e3555d4f3fc6b9410d0b5784590
Binary files /dev/null and b/modeling_cuda/__pycache__/MaskFormerModel.cpython-37.pyc differ
diff --git a/modeling_cuda/__pycache__/__init__.cpython-37.pyc b/modeling_cuda/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..8b0775bb499f28c09e47ea2cae6c304d7a77aae4
Binary files /dev/null and b/modeling_cuda/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling_cuda/backbone/__pycache__/resnet.cpython-37.pyc b/modeling_cuda/backbone/__pycache__/resnet.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..0d3723c508f4c623531c8880cc226a34bd0aec40
Binary files /dev/null and b/modeling_cuda/backbone/__pycache__/resnet.cpython-37.pyc differ
diff --git a/modeling_cuda/backbone/resnet.py b/modeling_cuda/backbone/resnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..96718cb7fbdc6a1ba244b66d4f684e6521fd17d7
--- /dev/null
+++ b/modeling_cuda/backbone/resnet.py
@@ -0,0 +1,190 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : resnet.py
+@Time : 2022/04/23 14:08:10
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : Backbone
+'''
+
+# here put the import lib
+import torch
+import torch.nn as nn
+from addict import Dict
+import torch.utils.model_zoo as model_zoo
+
+BN_MOMENTUM = 0.1
+
+model_urls = {'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth',
+ 'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth',
+ 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth',
+ 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
+ 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', }
+
+
+def conv3x3(in_planes, out_planes, stride=1):
+ """3x3 convolution with padding"""
+ return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False)
+
+
+class InvertedResidual(nn.Module):
+ def __init__(self, in_channels, hidden_dim, out_channels=3):
+ super(InvertedResidual, self).__init__()
+
+ self.conv = nn.Sequential(
+ nn.Conv2d(in_channels, hidden_dim, kernel_size=1, stride=1, padding=0, bias=True),
+ nn.BatchNorm2d(hidden_dim, momentum=BN_MOMENTUM),
+ nn.ReLU6(inplace=True),
+
+ # dw
+ # nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, stride=1, padding=1, bias=False),
+ # nn.BatchNorm2d(hidden_dim, momentum=BN_MOMENTUM),
+ # nn.ReLU(inplace=True),
+
+ # pw-linear
+ nn.Conv2d(hidden_dim, out_channels, kernel_size=1, stride=1, padding=0, bias=False),
+ nn.BatchNorm2d(out_channels, momentum=BN_MOMENTUM),
+ nn.ReLU(inplace=True)
+ )
+
+ def forward(self, x):
+ return self.conv(x)
+
+
+class BasicBlock(nn.Module):
+ expansion = 1
+
+ def __init__(self, inplanes, planes, stride=1, downsample=None):
+ super(BasicBlock, self).__init__()
+ self.conv1 = conv3x3(inplanes, planes, stride)
+ self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.relu = nn.ReLU(inplace=True)
+ self.conv2 = conv3x3(planes, planes)
+ self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.downsample = downsample
+ self.stride = stride
+
+ def forward(self, x):
+ residual = x
+
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+
+ if self.downsample is not None:
+ residual = self.downsample(x)
+
+ out += residual
+ out = self.relu(out)
+
+ return out
+
+
+class Bottleneck(nn.Module):
+ expansion = 4
+
+ def __init__(self, inplanes, planes, stride=1, downsample=None):
+ super(Bottleneck, self).__init__()
+ self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
+ self.bn1 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
+ self.bn2 = nn.BatchNorm2d(planes, momentum=BN_MOMENTUM)
+ self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1, bias=False)
+ self.bn3 = nn.BatchNorm2d(planes * self.expansion, momentum=BN_MOMENTUM)
+ self.relu = nn.ReLU(inplace=True)
+ self.downsample = downsample
+ self.stride = stride
+
+ def forward(self, x):
+ residual = x
+
+ out = self.conv1(x)
+ out = self.bn1(out)
+ out = self.relu(out)
+
+ out = self.conv2(out)
+ out = self.bn2(out)
+ out = self.relu(out)
+
+ out = self.conv3(out)
+ out = self.bn3(out)
+
+ if self.downsample is not None:
+ residual = self.downsample(x)
+
+ out += residual
+ out = self.relu(out)
+
+ return out
+
+
+class ResNet(nn.Module):
+ def __init__(self, block, layers):
+ super(ResNet, self).__init__()
+ self.inplanes = 64
+
+ self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False)
+ self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM)
+ self.relu = nn.ReLU(inplace=True)
+ self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
+ self.layer1 = self._make_layer(block, 64, layers[0])
+ self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
+ self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
+ self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
+
+ def _make_layer(self, block, planes, blocks, stride=1):
+ downsample = None
+ if stride != 1 or self.inplanes != planes * block.expansion:
+ downsample = nn.Sequential(nn.Conv2d(self.inplanes, planes * block.expansion,
+ kernel_size=1, stride=stride, bias=False),
+ nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM))
+
+ layers = []
+ layers.append(block(self.inplanes, planes, stride, downsample))
+ self.inplanes = planes * block.expansion
+ for i in range(1, blocks):
+ layers.append(block(self.inplanes, planes))
+ return nn.Sequential(*layers)
+
+ def forward(self, input_x):
+ out = {}
+ x = self.conv1(input_x)
+ x = self.bn1(x)
+ x = self.relu(x)
+ feature1 = self.maxpool(x)
+
+ feature2 = self.layer1(feature1)
+ out['res2'] = feature2
+
+ feature3 = self.layer2(feature2)
+ out['res3'] = feature3
+
+ feature4 = self.layer3(feature3)
+ out['res4'] = feature4
+
+ feature5 = self.layer4(feature4)
+ out['res5'] = feature5
+
+ return out
+
+ def init_weights(self, num_layers=50):
+ # url = model_urls['resnet{}'.format(num_layers)]
+ # pretrained_state_dict = model_zoo.load_url(url, model_dir='/home/code/pytorch_model/')
+ # print('=> loading pretrained model {}'.format(url))
+ pretrained_model = r'/home/code/pytorch_model/resnet50-19c8e357.pth'
+ pretrained_state_dict = torch.load(pretrained_model)
+
+ self.load_state_dict(pretrained_state_dict, strict=False)
+
+
+resnet_spec = {'resnet18': (BasicBlock, [2, 2, 2, 2]),
+ 'resnet34': (BasicBlock, [3, 4, 6, 3]),
+ 'resnet50': (Bottleneck, [3, 4, 6, 3]),
+ 'resnet101': (Bottleneck, [3, 4, 23, 3]),
+ 'resnet152': (Bottleneck, [3, 8, 36, 3])}
\ No newline at end of file
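+
+ # Smoke test with an illustrative input size: the backbone returns a res2..res5 pyramid with
+ # strides 4, 8, 16, 32; Bottleneck variants give 256, 512, 1024, 2048 channels.
+ if __name__ == "__main__":
+     block, num_blocks = resnet_spec['resnet50']
+     backbone = ResNet(block, num_blocks)
+     feats = backbone(torch.randn(1, 3, 224, 224))
+     for name, feat in feats.items():
+         print(name, tuple(feat.shape))   # res2 (1, 256, 56, 56) ... res5 (1, 2048, 7, 7)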
diff --git a/modeling_cuda/backbone/swin.py b/modeling_cuda/backbone/swin.py
new file mode 100644
index 0000000000000000000000000000000000000000..3b099d84396ac31d22881e5b6c9e53d2d0abaef3
--- /dev/null
+++ b/modeling_cuda/backbone/swin.py
@@ -0,0 +1,770 @@
+# --------------------------------------------------------
+# Swin Transformer
+# Copyright (c) 2021 Microsoft
+# Licensed under The MIT License [see LICENSE for details]
+# Written by Ze Liu, Yutong Lin, Yixuan Wei
+# --------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/SwinTransformer/Swin-Transformer-Semantic-Segmentation/blob/main/mmseg/models/backbones/swin_transformer.py
+
+import numpy as np
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+from timm.models.layers import DropPath, to_2tuple, trunc_normal_
+
+from detectron2.modeling import BACKBONE_REGISTRY, Backbone, ShapeSpec
+
+
+class Mlp(nn.Module):
+ """Multilayer perceptron."""
+
+ def __init__(
+ self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0
+ ):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
+ x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
+ return x
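+
+ # Round-trip sketch with illustrative sizes: window_partition and window_reverse are inverses
+ # whenever H and W are multiples of window_size, which SwinTransformerBlock guarantees by padding.
+ #     x = torch.randn(2, 14, 14, 96)
+ #     windows = window_partition(x, 7)   # (2 * 2 * 2, 7, 7, 96)
+ #     assert torch.equal(window_reverse(windows, 7, 14, 14), x)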
+
+
+class WindowAttention(nn.Module):
+ """Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both of shifted and non-shifted window.
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(
+ self,
+ dim,
+ window_size,
+ num_heads,
+ qkv_bias=True,
+ qk_scale=None,
+ attn_drop=0.0,
+ proj_drop=0.0,
+ ):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim ** -0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = nn.Parameter(
+ torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
+ ) # 2*Wh-1 * 2*Ww-1, nH
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = torch.arange(self.window_size[0])
+ coords_w = torch.arange(self.window_size[1])
+ coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
+ relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
+ relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+ relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
+ self.register_buffer("relative_position_index", relative_position_index)
+
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table, std=0.02)
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x, mask=None):
+ """Forward function.
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = (
+ self.qkv(x)
+ .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
+ .permute(2, 0, 3, 1, 4)
+ )
+ q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
+
+ q = q * self.scale
+ attn = q @ k.transpose(-2, -1)
+
+ relative_position_bias = self.relative_position_bias_table[
+ self.relative_position_index.view(-1)
+ ].view(
+ self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1
+ ) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.permute(
+ 2, 0, 1
+ ).contiguous() # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.view(-1, self.num_heads, N, N)
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+ x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
+
+class SwinTransformerBlock(nn.Module):
+ """Swin Transformer Block.
+ Args:
+ dim (int): Number of input channels.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(
+ self,
+ dim,
+ num_heads,
+ window_size=7,
+ shift_size=0,
+ mlp_ratio=4.0,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.0,
+ attn_drop=0.0,
+ drop_path=0.0,
+ act_layer=nn.GELU,
+ norm_layer=nn.LayerNorm,
+ ):
+ super().__init__()
+ self.dim = dim
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+ assert 0 <= self.shift_size < self.window_size, "shift_size must be in the range [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim,
+ window_size=to_2tuple(self.window_size),
+ num_heads=num_heads,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ attn_drop=attn_drop,
+ proj_drop=drop,
+ )
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(
+ in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop
+ )
+
+ self.H = None
+ self.W = None
+
+ def forward(self, x, mask_matrix):
+ """Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ mask_matrix: Attention mask for cyclic shift.
+ """
+ B, L, C = x.shape
+ H, W = self.H, self.W
+ assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.view(B, H, W, C)
+
+ # pad feature maps to multiples of window size
+ pad_l = pad_t = 0
+ pad_r = (self.window_size - W % self.window_size) % self.window_size
+ pad_b = (self.window_size - H % self.window_size) % self.window_size
+ x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
+ _, Hp, Wp, _ = x.shape
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
+ attn_mask = mask_matrix
+ else:
+ shifted_x = x
+ attn_mask = None
+
+ # partition windows
+ x_windows = window_partition(
+ shifted_x, self.window_size
+ ) # nW*B, window_size, window_size, C
+ x_windows = x_windows.view(
+ -1, self.window_size * self.window_size, C
+ ) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA
+ attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C
+
+ # merge windows
+ attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
+ shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
+ else:
+ x = shifted_x
+
+ if pad_r > 0 or pad_b > 0:
+ x = x[:, :H, :W, :].contiguous()
+
+ x = x.view(B, H * W, C)
+
+ # FFN
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+
+class PatchMerging(nn.Module):
+ """Patch Merging Layer
+ Args:
+ dim (int): Number of input channels.
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
+ self.norm = norm_layer(4 * dim)
+
+ def forward(self, x, H, W):
+ """Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+
+ x = x.view(B, H, W, C)
+
+ # padding
+ pad_input = (H % 2 == 1) or (W % 2 == 1)
+ if pad_input:
+ x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
+
+
+class BasicLayer(nn.Module):
+ """A basic Swin Transformer layer for one stage.
+ Args:
+ dim (int): Number of feature channels
+ depth (int): Depths of this stage.
+ num_heads (int): Number of attention head.
+ window_size (int): Local window size. Default: 7.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ """
+
+ def __init__(
+ self,
+ dim,
+ depth,
+ num_heads,
+ window_size=7,
+ mlp_ratio=4.0,
+ qkv_bias=True,
+ qk_scale=None,
+ drop=0.0,
+ attn_drop=0.0,
+ drop_path=0.0,
+ norm_layer=nn.LayerNorm,
+ downsample=None,
+ use_checkpoint=False,
+ ):
+ super().__init__()
+ self.window_size = window_size
+ self.shift_size = window_size // 2
+ self.depth = depth
+ self.use_checkpoint = use_checkpoint
+
+ # build blocks
+ self.blocks = nn.ModuleList(
+ [
+ SwinTransformerBlock(
+ dim=dim,
+ num_heads=num_heads,
+ window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop,
+ attn_drop=attn_drop,
+ drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
+ norm_layer=norm_layer,
+ )
+ for i in range(depth)
+ ]
+ )
+
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+
+ def forward(self, x, H, W):
+ """Forward function.
+ Args:
+ x: Input feature, tensor size (B, H*W, C).
+ H, W: Spatial resolution of the input feature.
+ """
+
+ # calculate attention mask for SW-MSA
+ Hp = int(np.ceil(H / self.window_size)) * self.window_size
+ Wp = int(np.ceil(W / self.window_size)) * self.window_size
+ img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device) # 1 Hp Wp 1
+ h_slices = (
+ slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None),
+ )
+ w_slices = (
+ slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None),
+ )
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(
+ img_mask, self.window_size
+ ) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
+ attn_mask == 0, float(0.0)
+ )
+
+ for blk in self.blocks:
+ blk.H, blk.W = H, W
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, attn_mask)
+ else:
+ x = blk(x, attn_mask)
+ if self.downsample is not None:
+ x_down = self.downsample(x, H, W)
+ Wh, Ww = (H + 1) // 2, (W + 1) // 2
+ return x, H, W, x_down, Wh, Ww
+ else:
+ return x, H, W, x, H, W
+
+
+class PatchEmbed(nn.Module):
+ """Image to Patch Embedding
+ Args:
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ patch_size = to_2tuple(patch_size)
+ self.patch_size = patch_size
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ """Forward function."""
+ # padding
+ _, _, H, W = x.size()
+ if W % self.patch_size[1] != 0:
+ x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
+ if H % self.patch_size[0] != 0:
+ x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))
+
+ x = self.proj(x) # B C Wh Ww
+ if self.norm is not None:
+ Wh, Ww = x.size(2), x.size(3)
+ x = x.flatten(2).transpose(1, 2)
+ x = self.norm(x)
+ x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)
+
+ return x
+
+
+class SwinTransformer(nn.Module):
+ """Swin Transformer backbone.
+ A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
+ https://arxiv.org/pdf/2103.14030
+ Args:
+ pretrain_img_size (int): Input image size for training the pretrained model,
+ used in absolute position embedding. Default 224.
+ patch_size (int | tuple(int)): Patch size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ depths (tuple[int]): Depths of each Swin Transformer stage.
+ num_heads (tuple[int]): Number of attention head of each stage.
+ window_size (int): Window size. Default: 7.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
+ qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
+ drop_rate (float): Dropout rate.
+ attn_drop_rate (float): Attention dropout rate. Default: 0.
+ drop_path_rate (float): Stochastic depth rate. Default: 0.2.
+ norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
+ ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
+ patch_norm (bool): If True, add normalization after patch embedding. Default: True.
+ out_indices (Sequence[int]): Output from which stages.
+ frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
+ -1 means not freezing any parameters.
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ """
+
+ def __init__(
+ self,
+ pretrain_img_size=224,
+ patch_size=4,
+ in_chans=3,
+ embed_dim=96,
+ depths=[2, 2, 6, 2],
+ num_heads=[3, 6, 12, 24],
+ window_size=7,
+ mlp_ratio=4.0,
+ qkv_bias=True,
+ qk_scale=None,
+ drop_rate=0.0,
+ attn_drop_rate=0.0,
+ drop_path_rate=0.2,
+ norm_layer=nn.LayerNorm,
+ ape=False,
+ patch_norm=True,
+ out_indices=(0, 1, 2, 3),
+ frozen_stages=-1,
+ use_checkpoint=False,
+ ):
+ super().__init__()
+
+ self.pretrain_img_size = pretrain_img_size
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.out_indices = out_indices
+ self.frozen_stages = frozen_stages
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ patch_size=patch_size,
+ in_chans=in_chans,
+ embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None,
+ )
+
+ # absolute position embedding
+ if self.ape:
+ pretrain_img_size = to_2tuple(pretrain_img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [
+ pretrain_img_size[0] // patch_size[0],
+ pretrain_img_size[1] // patch_size[1],
+ ]
+
+ self.absolute_pos_embed = nn.Parameter(
+ torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
+ )
+ trunc_normal_(self.absolute_pos_embed, std=0.02)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = [
+ x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
+ ] # stochastic depth decay rule
+
+ # build layers
+ self.layers = nn.ModuleList()
+ for i_layer in range(self.num_layers):
+ layer = BasicLayer(
+ dim=int(embed_dim * 2 ** i_layer),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias,
+ qk_scale=qk_scale,
+ drop=drop_rate,
+ attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
+ norm_layer=norm_layer,
+ downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
+ use_checkpoint=use_checkpoint,
+ )
+ self.layers.append(layer)
+
+ num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
+ self.num_features = num_features
+
+ # add a norm layer for each output
+ for i_layer in out_indices:
+ layer = norm_layer(num_features[i_layer])
+ layer_name = f"norm{i_layer}"
+ self.add_module(layer_name, layer)
+
+ self._freeze_stages()
+
+ def _freeze_stages(self):
+ if self.frozen_stages >= 0:
+ self.patch_embed.eval()
+ for param in self.patch_embed.parameters():
+ param.requires_grad = False
+
+ if self.frozen_stages >= 1 and self.ape:
+ self.absolute_pos_embed.requires_grad = False
+
+ if self.frozen_stages >= 2:
+ self.pos_drop.eval()
+ for i in range(0, self.frozen_stages - 1):
+ m = self.layers[i]
+ m.eval()
+ for param in m.parameters():
+ param.requires_grad = False
+
+ def init_weights(self, pretrained=None):
+ """Initialize the weights in backbone.
+ Args:
+ pretrained (str, optional): Path to pre-trained weights.
+ Defaults to None.
+ """
+
+ def _init_weights(m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight, std=0.02)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ nn.init.constant_(m.bias, 0)
+ elif isinstance(m, nn.LayerNorm):
+ nn.init.constant_(m.bias, 0)
+ nn.init.constant_(m.weight, 1.0)
+
+ def forward(self, x):
+ """Forward function."""
+ x = self.patch_embed(x)
+
+ Wh, Ww = x.size(2), x.size(3)
+ if self.ape:
+ # interpolate the position embedding to the corresponding size
+ absolute_pos_embed = F.interpolate(
+ self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
+ )
+ x = (x + absolute_pos_embed).flatten(2).transpose(1, 2) # B Wh*Ww C
+ else:
+ x = x.flatten(2).transpose(1, 2)
+ x = self.pos_drop(x)
+
+ outs = {}
+ for i in range(self.num_layers):
+ layer = self.layers[i]
+ x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
+
+ if i in self.out_indices:
+ norm_layer = getattr(self, f"norm{i}")
+ x_out = norm_layer(x_out)
+
+ out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
+ outs["res{}".format(i + 2)] = out
+
+ return outs
+
+ def train(self, mode=True):
+ """Convert the model into training mode while keep layers freezed."""
+ super(SwinTransformer, self).train(mode)
+ self._freeze_stages()
+
+
+@BACKBONE_REGISTRY.register()
+class D2SwinTransformer(SwinTransformer, Backbone):
+ def __init__(self, cfg, input_shape):
+
+ pretrain_img_size = cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE
+ patch_size = cfg.MODEL.SWIN.PATCH_SIZE
+ in_chans = 3
+ embed_dim = cfg.MODEL.SWIN.EMBED_DIM
+ depths = cfg.MODEL.SWIN.DEPTHS
+ num_heads = cfg.MODEL.SWIN.NUM_HEADS
+ window_size = cfg.MODEL.SWIN.WINDOW_SIZE
+ mlp_ratio = cfg.MODEL.SWIN.MLP_RATIO
+ qkv_bias = cfg.MODEL.SWIN.QKV_BIAS
+ qk_scale = cfg.MODEL.SWIN.QK_SCALE
+ drop_rate = cfg.MODEL.SWIN.DROP_RATE
+ attn_drop_rate = cfg.MODEL.SWIN.ATTN_DROP_RATE
+ drop_path_rate = cfg.MODEL.SWIN.DROP_PATH_RATE
+ norm_layer = nn.LayerNorm
+ ape = cfg.MODEL.SWIN.APE
+ patch_norm = cfg.MODEL.SWIN.PATCH_NORM
+ use_checkpoint = cfg.MODEL.SWIN.USE_CHECKPOINT
+
+ super().__init__(
+ pretrain_img_size,
+ patch_size,
+ in_chans,
+ embed_dim,
+ depths,
+ num_heads,
+ window_size,
+ mlp_ratio,
+ qkv_bias,
+ qk_scale,
+ drop_rate,
+ attn_drop_rate,
+ drop_path_rate,
+ norm_layer,
+ ape,
+ patch_norm,
+ use_checkpoint=use_checkpoint,
+ )
+
+ self._out_features = cfg.MODEL.SWIN.OUT_FEATURES
+
+ self._out_feature_strides = {
+ "res2": 4,
+ "res3": 8,
+ "res4": 16,
+ "res5": 32,
+ }
+ self._out_feature_channels = {
+ "res2": self.num_features[0],
+ "res3": self.num_features[1],
+ "res4": self.num_features[2],
+ "res5": self.num_features[3],
+ }
+
+ def forward(self, x):
+ """
+ Args:
+ x: Tensor of shape (N,C,H,W). H, W must be a multiple of ``self.size_divisibility``.
+ Returns:
+ dict[str->Tensor]: names and the corresponding features
+ """
+ assert (
+ x.dim() == 4
+ ), f"SwinTransformer takes an input of shape (N, C, H, W). Got {x.shape} instead!"
+ outputs = {}
+ y = super().forward(x)
+ for k in y.keys():
+ if k in self._out_features:
+ outputs[k] = y[k]
+ return outputs
+
+ def output_shape(self):
+ return {
+ name: ShapeSpec(
+ channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
+ )
+ for name in self._out_features
+ }
+
+ @property
+ def size_divisibility(self):
+ return 32
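+
+ # Smoke test for the plain (non-detectron2) backbone with illustrative Swin-T settings: the output
+ # is a res2..res5 pyramid whose channel width doubles at every stage.
+ if __name__ == "__main__":
+     backbone = SwinTransformer(embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24])
+     feats = backbone(torch.randn(1, 3, 224, 224))
+     for name, feat in feats.items():
+         print(name, tuple(feat.shape))   # res2 (1, 96, 56, 56) ... res5 (1, 768, 7, 7)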
diff --git a/modeling_cuda/pixel_decoder/__init__.py b/modeling_cuda/pixel_decoder/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..9020c2df23e2af280b7bb168b996ae9eaf312eb8
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/__init__.py
@@ -0,0 +1 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
diff --git a/modeling_cuda/pixel_decoder/__pycache__/__init__.cpython-37.pyc b/modeling_cuda/pixel_decoder/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..4abc057c66f9a837cca52ec6e7266993ca23b39e
Binary files /dev/null and b/modeling_cuda/pixel_decoder/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling_cuda/pixel_decoder/__pycache__/msdeformattn.cpython-37.pyc b/modeling_cuda/pixel_decoder/__pycache__/msdeformattn.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..60ff14d12b5856e49e5edf0aa9546d1d0e0e417a
Binary files /dev/null and b/modeling_cuda/pixel_decoder/__pycache__/msdeformattn.cpython-37.pyc differ
diff --git a/modeling_cuda/pixel_decoder/msdeformattn.py b/modeling_cuda/pixel_decoder/msdeformattn.py
new file mode 100644
index 0000000000000000000000000000000000000000..80008afdf93cb6188d5406da5d195683a4a1c668
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/msdeformattn.py
@@ -0,0 +1,311 @@
+#!/usr/bin/env python
+# -*- encoding: utf-8 -*-
+'''
+@File : msdeformattn.py
+@Time : 2022/10/02 16:51:09
+@Author : BQH
+@Version : 1.0
+@Contact : raogx.vip@hotmail.com
+@License : (C)Copyright 2017-2018, Liugroup-NLPR-CASIA
+@Desc : Adapted from Mask2Former, with the detectron2 dependency removed
+'''
+
+# here put the import lib
+
+import numpy as np
+import fvcore.nn.weight_init as weight_init
+import torch
+from torch import nn
+from torch.nn import functional as F
+
+
+from ..transformer_decoder.position_encoding import PositionEmbeddingSine
+from ..transformer_decoder.transformer import _get_clones, _get_activation_fn
+from .ops.modules import MSDeformAttn
+
+# MSDeformAttn Transformer encoder in deformable detr
+class MSDeformAttnTransformerEncoderLayer(nn.Module):
+ def __init__(self,
+ d_model=256, d_ffn=1024,
+ dropout=0.1, activation="relu",
+ n_levels=4, n_heads=8, n_points=4):
+ super().__init__()
+
+ # self attention
+ self.self_attn = MSDeformAttn(d_model, n_levels, n_heads, n_points)
+ self.dropout1 = nn.Dropout(dropout)
+ self.norm1 = nn.LayerNorm(d_model)
+
+ # ffn
+ self.linear1 = nn.Linear(d_model, d_ffn)
+ self.activation = _get_activation_fn(activation)
+ self.dropout2 = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(d_ffn, d_model)
+ self.dropout3 = nn.Dropout(dropout)
+ self.norm2 = nn.LayerNorm(d_model)
+
+ @staticmethod
+ def with_pos_embed(tensor, pos):
+ return tensor if pos is None else tensor + pos
+
+ def forward_ffn(self, src):
+ src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
+ src = src + self.dropout3(src2)
+ src = self.norm2(src)
+ return src
+
+ def forward(self, src, pos, reference_points, spatial_shapes, level_start_index, padding_mask=None):
+ # self attention
+ src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, padding_mask)
+ src = src + self.dropout1(src2)
+ src = self.norm1(src)
+
+ # ffn
+ src = self.forward_ffn(src)
+
+ return src
+
+
+class MSDeformAttnTransformerEncoder(nn.Module):
+ def __init__(self, encoder_layer, num_layers):
+ super().__init__()
+ self.layers = _get_clones(encoder_layer, num_layers)
+ self.num_layers = num_layers
+
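+ # Reference points: the normalized (x, y) center of every spatial location in every feature level, rescaled by valid_ratios to account for padded regions.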
+ @staticmethod
+ def get_reference_points(spatial_shapes, valid_ratios, device):
+ reference_points_list = []
+ for lvl, (H_, W_) in enumerate(spatial_shapes):
+ ref_y, ref_x = torch.meshgrid(torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),
+ torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device))
+ ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)
+ ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)
+ ref = torch.stack((ref_x, ref_y), -1) # [1, H_ * W_, 2]
+ reference_points_list.append(ref)
+ reference_points = torch.cat(reference_points_list, 1)
+ reference_points = reference_points[:, :, None] * valid_ratios[:, None]
+ return reference_points
+
+ def forward(self, src, spatial_shapes, level_start_index, valid_ratios, pos=None, padding_mask=None):
+ output = src
+ reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=src.device)
+ for _, layer in enumerate(self.layers):
+ output = layer(output, pos, reference_points, spatial_shapes, level_start_index, padding_mask)
+
+ return output
+
+
+class MSDeformAttnTransformerEncoderOnly(nn.Module):
+ def __init__(self, d_model=256, nhead=8,
+ num_encoder_layers=6, dim_feedforward=1024, dropout=0.1,
+ activation="relu",
+ num_feature_levels=4, enc_n_points=4,
+ ):
+ super().__init__()
+
+ self.d_model = d_model
+ self.nhead = nhead
+
+ encoder_layer = MSDeformAttnTransformerEncoderLayer(d_model, dim_feedforward,
+ dropout, activation,
+ num_feature_levels, nhead, enc_n_points)
+ self.encoder = MSDeformAttnTransformerEncoder(encoder_layer, num_encoder_layers)
+
+ self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+ for m in self.modules():
+ if isinstance(m, MSDeformAttn):
+ m._reset_parameters()
+ nn.init.normal_(self.level_embed)
+
+ def get_valid_ratio(self, mask):
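+ # Per-axis fraction of each (possibly padded) feature map that is valid; with the all-False masks built in forward() this is always 1.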
+ _, H, W = mask.shape
+ valid_H = torch.sum(~mask[:, :, 0], 1)
+ valid_W = torch.sum(~mask[:, 0, :], 1)
+ valid_ratio_h = valid_H.float() / H
+ valid_ratio_w = valid_W.float() / W
+ valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
+ return valid_ratio
+
+ def forward(self, srcs, pos_embeds):
+ masks = [torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) for x in srcs]
+ # prepare input for encoder
+ src_flatten = []
+ mask_flatten = []
+ lvl_pos_embed_flatten = []
+ spatial_shapes = []
+ for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):
+ bs, c, h, w = src.shape
+ spatial_shape = (h, w)
+ spatial_shapes.append(spatial_shape)
+ src = src.flatten(2).transpose(1, 2)
+ mask = mask.flatten(1)
+ pos_embed = pos_embed.flatten(2).transpose(1, 2)
+ lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)
+ lvl_pos_embed_flatten.append(lvl_pos_embed)
+ src_flatten.append(src)
+ mask_flatten.append(mask)
+ src_flatten = torch.cat(src_flatten, 1)
+ mask_flatten = torch.cat(mask_flatten, 1)
+ lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1)
+ spatial_shapes = torch.as_tensor(spatial_shapes, dtype=torch.long, device=src_flatten.device)
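+ # level_start_index[l] is the offset of level l inside the flattened token sequence of length sum(H_l * W_l).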
+ level_start_index = torch.cat((spatial_shapes.new_zeros((1, )), spatial_shapes.prod(1).cumsum(0)[:-1]))
+ valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
+
+ # encoder
+ memory = self.encoder(src_flatten, spatial_shapes, level_start_index, valid_ratios, lvl_pos_embed_flatten, mask_flatten)
+
+ return memory, spatial_shapes, level_start_index
+
+class MSDeformAttnPixelDecoder(nn.Module):
+ def __init__(
+ self,
+ input_shape,
+ transformer_dropout=0.1,
+ transformer_nheads=8,
+ transformer_dim_feedforward=2048,
+ transformer_enc_layers=6,
+ conv_dim=256,
+ mask_dim=256,
+
+ # deformable transformer encoder args
+ transformer_in_features= ["res3", "res4", "res5"],
+ common_stride=4,
+ ):
+ super().__init__()
+ # (channel, stride) of the backbone feature levels ["res3", "res4", "res5"], eg. [(32, 4), (64, 8), (128, 16), (256, 32)]
+ transformer_input_shape = {k: v for k, v in input_shape.items() if k in transformer_in_features}
+
+ # this is the input shape of the pixel decoder
+ self.in_features = [k for k, v in input_shape.items()] # starting from "res2" to "res5"
+ self.feature_channels = [v.channels for k, v in input_shape.items()] # eg. [16, 64, 128, 256]
+
+ # this is the input shape of the transformer encoder (it may use fewer features than the pixel decoder)
+ self.transformer_in_features = [k for k, v in transformer_input_shape.items()] # starting from "res3" to "res5"
+ transformer_in_channels = [v.channels for k, v in transformer_input_shape.items()] # eg. [64, 128, 256]
+ self.transformer_feature_strides = [v.stride for k, v in transformer_input_shape.items()] # to decide extra FPN layers
+
+ self.transformer_num_feature_levels = len(self.transformer_in_features)
+ if self.transformer_num_feature_levels > 1:
+ input_proj_list = []
+ # from low resolution to high resolution (res5 -> res3)
+ for in_channels in transformer_in_channels[::-1]:
+ input_proj_list.append(nn.Sequential(
+ nn.Conv2d(in_channels, conv_dim, kernel_size=1),
+ nn.GroupNorm(32, conv_dim),
+ ))
+ self.input_proj = nn.ModuleList(input_proj_list)
+ else:
+ self.input_proj = nn.ModuleList([
+ nn.Sequential(
+ nn.Conv2d(transformer_in_channels[-1], conv_dim, kernel_size=1),
+ nn.GroupNorm(32, conv_dim),
+ )])
+
+ for proj in self.input_proj:
+ nn.init.xavier_uniform_(proj[0].weight, gain=1)
+ nn.init.constant_(proj[0].bias, 0)
+
+ self.transformer = MSDeformAttnTransformerEncoderOnly(
+ d_model=conv_dim,
+ dropout=transformer_dropout,
+ nhead=transformer_nheads,
+ dim_feedforward=transformer_dim_feedforward,
+ num_encoder_layers=transformer_enc_layers,
+ num_feature_levels=self.transformer_num_feature_levels,
+ )
+ N_steps = conv_dim // 2
+ self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
+
+ self.mask_dim = mask_dim
+ # use 1x1 conv instead
+ self.mask_features = nn.Conv2d(
+ conv_dim,
+ mask_dim,
+ kernel_size=1,
+ stride=1,
+ padding=0,
+ )
+ weight_init.c2_xavier_fill(self.mask_features)
+
+ self.maskformer_num_feature_levels = 3 # always use 3 scales
+ self.common_stride = common_stride
+
+ # extra fpn levels
+ stride = min(self.transformer_feature_strides)
+ self.num_fpn_levels = int(np.log2(stride) - np.log2(self.common_stride))
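+ # e.g. with transformer strides (8, 16, 32) and common_stride 4: log2(8) - log2(4) = 1 extra FPN level, built from "res2"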
+
+ lateral_convs = []
+ output_convs = []
+
+ for idx, in_channels in enumerate(self.feature_channels[:self.num_fpn_levels]): # res2 -> fpn
+ lateral_conv = nn.Sequential(nn.Conv2d(in_channels, conv_dim, kernel_size=1),
+ nn.GroupNorm(32, conv_dim),
+ nn.ReLU(inplace=True))
+
+ output_conv = nn.Sequential(nn.Conv2d(conv_dim, conv_dim, kernel_size=3, stride=1, padding=1),
+ nn.GroupNorm(32, conv_dim),
+ nn.ReLU(inplace=True))
+
+ weight_init.c2_xavier_fill(lateral_conv[0])
+ weight_init.c2_xavier_fill(output_conv[0])
+ self.add_module("adapter_{}".format(idx + 1), lateral_conv)
+ self.add_module("layer_{}".format(idx + 1), output_conv)
+
+ lateral_convs.append(lateral_conv)
+ output_convs.append(output_conv)
+ # Place convs into top-down order (from low to high resolution)
+ # to make the top-down computation in forward clearer.
+ self.lateral_convs = lateral_convs[::-1]
+ self.output_convs = output_convs[::-1]
+
+ def forward_features(self, features):
+ srcs = []
+ pos = []
+ # Reverse feature maps into top-down order (from low to high resolution), 'res5' -> 'res3'
+ for idx, f in enumerate(self.transformer_in_features[::-1]):
+ x = features[f].float() # deformable detr does not support half precision
+ srcs.append(self.input_proj[idx](x))
+ pos.append(self.pe_layer(x))
+
+ y, spatial_shapes, level_start_index = self.transformer(srcs, pos)
+ bs = y.shape[0]
+
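+ # Split the flattened encoder output back into per-level chunks (via level_start_index), then reshape each chunk to (bs, C, H_l, W_l).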
+ split_size_or_sections = [None] * self.transformer_num_feature_levels
+ for i in range(self.transformer_num_feature_levels):
+ if i < self.transformer_num_feature_levels - 1:
+ split_size_or_sections[i] = level_start_index[i + 1] - level_start_index[i]
+ else:
+ split_size_or_sections[i] = y.shape[1] - level_start_index[i]
+ y = torch.split(y, split_size_or_sections, dim=1)
+
+ out = []
+ multi_scale_features = []
+ num_cur_levels = 0
+ for i, z in enumerate(y):
+ out.append(z.transpose(1, 2).view(bs, -1, spatial_shapes[i][0], spatial_shapes[i][1]))
+
+ # append `out` with extra FPN levels
+ # Reverse feature maps into top-down order (from low to high resolution)
+ for idx, f in enumerate(self.in_features[:self.num_fpn_levels][::-1]):
+ x = features[f].float()
+ lateral_conv = self.lateral_convs[idx]
+ output_conv = self.output_convs[idx]
+ cur_fpn = lateral_conv(x)
+ # Following the FPN top-down pathway; bilinear upsampling (align_corners=False) is used here
+ y = cur_fpn + F.interpolate(out[-1], size=cur_fpn.shape[-2:], mode="bilinear", align_corners=False)
+ y = output_conv(y)
+ out.append(y)
+
+ for o in out:
+ if num_cur_levels < self.maskformer_num_feature_levels:
+ multi_scale_features.append(o)
+ num_cur_levels += 1
+
+ return self.mask_features(out[-1]), out[0], multi_scale_features
diff --git a/modeling_cuda/pixel_decoder/ops/functions/.ms_deform_attn_func.py.swp b/modeling_cuda/pixel_decoder/ops/functions/.ms_deform_attn_func.py.swp
new file mode 100644
index 0000000000000000000000000000000000000000..caa0cdde96aa476140cf75efe12a691a60e8471d
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/functions/.ms_deform_attn_func.py.swp differ
diff --git a/modeling_cuda/pixel_decoder/ops/functions/__init__.py b/modeling_cuda/pixel_decoder/ops/functions/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..2b06b5ac538b63bdb9a6c82e4635b95bb5491d5b
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/functions/__init__.py
@@ -0,0 +1,13 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from .ms_deform_attn_func import MSDeformAttnFunction
+
diff --git a/modeling_cuda/pixel_decoder/ops/functions/__pycache__/__init__.cpython-37.pyc b/modeling_cuda/pixel_decoder/ops/functions/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..10b019d4d2ebdae41c029888ffae379ac578d87f
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/functions/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-37.pyc b/modeling_cuda/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..71bc0332f97d2e52391e31e7b4a94985a484bb12
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/functions/__pycache__/ms_deform_attn_func.cpython-37.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/functions/ms_deform_attn_func.py b/modeling_cuda/pixel_decoder/ops/functions/ms_deform_attn_func.py
new file mode 100644
index 0000000000000000000000000000000000000000..bcaa591d12545225bf200146ae8fbc7534af35e0
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/functions/ms_deform_attn_func.py
@@ -0,0 +1,77 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import torch
+import torch.nn.functional as F
+from torch.autograd import Function
+from torch.autograd.function import once_differentiable
+
+try:
+ import MultiScaleDeformableAttention as MSDA
+except ModuleNotFoundError as e:
+ info_string = (
+ "\n\nPlease compile MultiScaleDeformableAttention CUDA op with the following commands:\n"
+ "\t`cd modeling_cuda/pixel_decoder/ops`\n"
+ "\t`sh make.sh`\n"
+ )
+ raise ModuleNotFoundError(info_string) from e
+
+
+class MSDeformAttnFunction(Function):
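+ # Autograd wrapper around the compiled MultiScaleDeformableAttention CUDA ops: forward calls ms_deform_attn_forward, backward calls ms_deform_attn_backward.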
+ @staticmethod
+ def forward(ctx, value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, im2col_step):
+ ctx.im2col_step = im2col_step
+ output = MSDA.ms_deform_attn_forward(
+ value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, ctx.im2col_step)
+ ctx.save_for_backward(value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights)
+ return output
+
+ @staticmethod
+ @once_differentiable
+ def backward(ctx, grad_output):
+ value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights = ctx.saved_tensors
+ grad_value, grad_sampling_loc, grad_attn_weight = \
+ MSDA.ms_deform_attn_backward(
+ value, value_spatial_shapes, value_level_start_index, sampling_locations, attention_weights, grad_output, ctx.im2col_step)
+
+ return grad_value, None, None, grad_sampling_loc, grad_attn_weight, None
+
+
+def ms_deform_attn_core_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):
+ """
+ @value: bs, sum(h, w), num_head, dim
+ @sampling_locations: bs, sum(h, w), num_head, num_layer, 4, 2
+ @attention_weights: bs, sum(h, w), num_head, num_layer, 4
+ """
+ N_, S_, M_, Dim = value.shape
+ _, Lq_, M_, L_, P_, _ = sampling_locations.shape
+ value_list = value.split([H_ * W_ for H_, W_ in value_spatial_shapes], dim=1)
+ sampling_grids = 2 * sampling_locations - 1 # map the range from [0, 1] to [-1, 1]; F.grid_sample expects grid values in [-1, 1]
+ sampling_value_list = []
+ for lid_, (H_, W_) in enumerate(value_spatial_shapes):
+ # N_, H_*W_, M_, D_ -> N_, H_*W_, M_*D_ -> N_, M_*D_, H_*W_ -> N_*M_, D_, H_, W_
+ value_l_ = value_list[lid_].flatten(2).transpose(1, 2).reshape(N_*M_, Dim, H_, W_) # eg. [bs * 8, 32, 28, 28]
+ # N_, Lq_, M_, P_, 2 -> N_, M_, Lq_, P_, 2 -> N_*M_, Lq_, P_, 2
+ sampling_grid_l_ = sampling_grids[:, :, :, lid_]
+ sampling_grid_l_ = sampling_grid_l_.transpose(1, 2).flatten(0, 1) # eg. [bs * 8, 1045, 4, 2]
+ # N_*M_, D_, Lq_, P_
+ sampling_value_l_ = F.grid_sample(value_l_, sampling_grid_l_, mode='bilinear', padding_mode='zeros', align_corners=False) # eg. [bs * 8, 32, 1045, 4]
+ sampling_value_list.append(sampling_value_l_)
+
+ # (N_, Lq_, M_, L_, P_) -> (N_, M_, Lq_, L_, P_) -> (N_*M_, 1, Lq_, L_*P_)
+ attention_weights = attention_weights.transpose(1, 2).reshape(N_*M_, 1, Lq_, L_*P_) # eg. [bs * 8, 1, 1045, 4 * 4]: 4 feature levels * 4 sampling points
+ # torch.stack(sampling_value_list, dim=-2): [bs * 8, 32, 1045, num_levels, 4] -> flatten(-2) -> [bs * 8, 32, 1045, 4 * 4] (4 feature levels * 4 sampling points)
+ output = (torch.stack(sampling_value_list, dim=-2).squeeze(2).flatten(-2) * attention_weights).sum(-1).view(N_, M_*Dim, Lq_)
+ return output.transpose(1, 2).contiguous()
diff --git a/modeling_cuda/pixel_decoder/ops/make.sh b/modeling_cuda/pixel_decoder/ops/make.sh
new file mode 100644
index 0000000000000000000000000000000000000000..7b38cdbf48f3571d986a33e7563b517952b51bb2
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/make.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+python setup.py build install
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__init__.py b/modeling_cuda/pixel_decoder/ops/modules/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..6fdbf03359958f3d67ab00f879bf6b61a6c8f06a
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/modules/__init__.py
@@ -0,0 +1,12 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from .ms_deform_attn import MSDeformAttn
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-310.pyc b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..3bb652cb561f53fd9acfe2836175b52f5de7635b
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-310.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-37.pyc b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e7351207c99579135df9023f9404bd634db8f17a
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-37.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-38.pyc b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e1dd239d89b581f13e9582217fb4ddd8f232a30a
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/__init__.cpython-38.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-310.pyc b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-310.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..16c420d9adcc2ed93a2d1b1dc54708fa4baf7bba
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-310.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-37.pyc b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..3f958ff32207e4417715539b9f023bbac9c37400
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-37.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-38.pyc b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..572b3bb83d20af5b7de4201f1153105bf7e7ab23
Binary files /dev/null and b/modeling_cuda/pixel_decoder/ops/modules/__pycache__/ms_deform_attn.cpython-38.pyc differ
diff --git a/modeling_cuda/pixel_decoder/ops/modules/ms_deform_attn.py b/modeling_cuda/pixel_decoder/ops/modules/ms_deform_attn.py
new file mode 100644
index 0000000000000000000000000000000000000000..2cb2956e4554637dba25787652ff1eb3c6d6b953
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/modules/ms_deform_attn.py
@@ -0,0 +1,121 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import warnings
+import math
+
+import torch
+from torch import nn
+import torch.nn.functional as F
+from torch.nn.init import xavier_uniform_, constant_
+
+from ..functions import MSDeformAttnFunction
+from ..functions.ms_deform_attn_func import ms_deform_attn_core_pytorch
+
+
+def _is_power_of_2(n):
+ if (not isinstance(n, int)) or (n < 0):
+ raise ValueError("invalid input for _is_power_of_2: {} (type: {})".format(n, type(n)))
+ return (n & (n-1) == 0) and n != 0
+
+
+class MSDeformAttn(nn.Module):
+ def __init__(self, d_model=256, n_levels=4, n_heads=8, n_points=4):
+ """
+ Multi-Scale Deformable Attention Module
+ :param d_model hidden dimension
+ :param n_levels number of feature levels
+ :param n_heads number of attention heads
+ :param n_points number of sampling points per attention head per feature level
+ """
+ super().__init__()
+ if d_model % n_heads != 0:
+ raise ValueError('d_model must be divisible by n_heads, but got {} and {}'.format(d_model, n_heads))
+ _d_per_head = d_model // n_heads
+ # _d_per_head should preferably be a power of 2, which is more efficient in the CUDA implementation
+ if not _is_power_of_2(_d_per_head):
+ warnings.warn("You'd better set d_model in MSDeformAttn to make the dimension of each attention head a power of 2 "
+ "which is more efficient in our CUDA implementation.")
+
+ self.im2col_step = 128
+
+ self.d_model = d_model
+ self.n_levels = n_levels
+ self.n_heads = n_heads
+ self.n_points = n_points
+
+ self.sampling_offsets = nn.Linear(d_model, n_heads * n_levels * n_points * 2)
+ self.attention_weights = nn.Linear(d_model, n_heads * n_levels * n_points)
+ self.value_proj = nn.Linear(d_model, d_model)
+ self.output_proj = nn.Linear(d_model, d_model)
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
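+ # Initialize sampling offsets so each head starts by sampling along a distinct, evenly spaced direction, with the i-th point at radius i + 1; attention-weight logits start at zero (uniform after softmax).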
+ constant_(self.sampling_offsets.weight.data, 0.)
+ thetas = torch.arange(self.n_heads, dtype=torch.float32) * (2.0 * math.pi / self.n_heads)
+ grid_init = torch.stack([thetas.cos(), thetas.sin()], -1)
+ grid_init = (grid_init / grid_init.abs().max(-1, keepdim=True)[0]).view(self.n_heads, 1, 1, 2).repeat(1, self.n_levels, self.n_points, 1)
+ for i in range(self.n_points):
+ grid_init[:, :, i, :] *= i + 1
+ with torch.no_grad():
+ self.sampling_offsets.bias = nn.Parameter(grid_init.view(-1))
+ constant_(self.attention_weights.weight.data, 0.)
+ constant_(self.attention_weights.bias.data, 0.)
+ xavier_uniform_(self.value_proj.weight.data)
+ constant_(self.value_proj.bias.data, 0.)
+ xavier_uniform_(self.output_proj.weight.data)
+ constant_(self.output_proj.bias.data, 0.)
+
+ def forward(self, query, reference_points, input_flatten, input_spatial_shapes, input_level_start_index, input_padding_mask=None):
+ """
+ :param query (N, Length_{query}, C)
+ :param reference_points (N, Length_{query}, n_levels, 2), range in [0, 1], top-left (0,0), bottom-right (1, 1), including padding area
+ or (N, Length_{query}, n_levels, 4), add additional (w, h) to form reference boxes
+ :param input_flatten (N, \sum_{l=0}^{L-1} H_l \cdot W_l, C)
+ :param input_spatial_shapes (n_levels, 2), [(H_0, W_0), (H_1, W_1), ..., (H_{L-1}, W_{L-1})]
+ :param input_level_start_index (n_levels, ), [0, H_0*W_0, H_0*W_0+H_1*W_1, H_0*W_0+H_1*W_1+H_2*W_2, ..., H_0*W_0+H_1*W_1+...+H_{L-1}*W_{L-1}]
+ :param input_padding_mask (N, \sum_{l=0}^{L-1} H_l \cdot W_l), True for padding elements, False for non-padding elements
+
+ :return output (N, Length_{query}, C)
+ """
+ N, Len_q, _ = query.shape
+ N, Len_in, _ = input_flatten.shape
+ assert (input_spatial_shapes[:, 0] * input_spatial_shapes[:, 1]).sum() == Len_in
+
+ value = self.value_proj(input_flatten)
+ if input_padding_mask is not None:
+ value = value.masked_fill(input_padding_mask[..., None], float(0))
+ value = value.view(N, Len_in, self.n_heads, self.d_model // self.n_heads)
+ sampling_offsets = self.sampling_offsets(query).view(N, Len_q, self.n_heads, self.n_levels, self.n_points, 2)
+ attention_weights = self.attention_weights(query).view(N, Len_q, self.n_heads, self.n_levels * self.n_points)
+ attention_weights = F.softmax(attention_weights, -1).view(N, Len_q, self.n_heads, self.n_levels, self.n_points)
+
+ # N, Len_q, n_heads, n_levels, n_points, 2
+ # input_spatial_shapes stores (h, w) while reference_points stores (x, y), so the (h, w) order in input_spatial_shapes must be swapped
+ offset_normalizer = torch.stack([input_spatial_shapes[..., 1], input_spatial_shapes[..., 0]], -1)
+ sampling_locations = reference_points[:, :, None, :, None, :] + sampling_offsets / offset_normalizer[None, None, None, :, None, :]
+
+ output = MSDeformAttnFunction.apply(value, input_spatial_shapes, input_level_start_index, sampling_locations, attention_weights, self.im2col_step)
+ # try:
+ # output = MSDeformAttnFunction.apply(value, input_spatial_shapes, input_level_start_index, sampling_locations, attention_weights, self.im2col_step)
+ # except:
+ # CPU, for debug or test only
+ # output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights)
+
+ ## For FLOPs calculation only
+ #output = ms_deform_attn_core_pytorch(value, input_spatial_shapes, sampling_locations, attention_weights)
+ output = self.output_proj(output)
+ return output
diff --git a/modeling_cuda/pixel_decoder/ops/setup.py b/modeling_cuda/pixel_decoder/ops/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..3b57ad313ac8f9b6586892142da8ba943e516cec
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/setup.py
@@ -0,0 +1,78 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+import os
+import glob
+
+import torch
+
+from torch.utils.cpp_extension import CUDA_HOME
+from torch.utils.cpp_extension import CppExtension
+from torch.utils.cpp_extension import CUDAExtension
+
+from setuptools import find_packages
+from setuptools import setup
+
+requirements = ["torch", "torchvision"]
+
+def get_extensions():
+ this_dir = os.path.dirname(os.path.abspath(__file__))
+ extensions_dir = os.path.join(this_dir, "src")
+
+ main_file = glob.glob(os.path.join(extensions_dir, "*.cpp"))
+ source_cpu = glob.glob(os.path.join(extensions_dir, "cpu", "*.cpp"))
+ source_cuda = glob.glob(os.path.join(extensions_dir, "cuda", "*.cu"))
+
+ sources = main_file + source_cpu
+ extension = CppExtension
+ extra_compile_args = {"cxx": []}
+ define_macros = []
+
+ # Force building the CUDA extension when FORCE_CUDA is set; torch.cuda.is_available() only reports whether a device is visible, not whether CUDA code can be built.
+ if (os.environ.get('FORCE_CUDA') or torch.cuda.is_available()) and CUDA_HOME is not None:
+ extension = CUDAExtension
+ sources += source_cuda
+ define_macros += [("WITH_CUDA", None)]
+ extra_compile_args["nvcc"] = [
+ "-DCUDA_HAS_FP16=1",
+ "-D__CUDA_NO_HALF_OPERATORS__",
+ "-D__CUDA_NO_HALF_CONVERSIONS__",
+ "-D__CUDA_NO_HALF2_OPERATORS__",
+ ]
+ else:
+ if CUDA_HOME is None:
+ raise NotImplementedError('CUDA_HOME is None. Please set environment variable CUDA_HOME.')
+ else:
+ raise NotImplementedError('No CUDA runtime is found. Please set FORCE_CUDA=1 or test it by running torch.cuda.is_available().')
+
+ sources = [os.path.join(extensions_dir, s) for s in sources]
+ include_dirs = [extensions_dir]
+ ext_modules = [
+ extension(
+ "MultiScaleDeformableAttention",
+ sources,
+ include_dirs=include_dirs,
+ define_macros=define_macros,
+ extra_compile_args=extra_compile_args,
+ )
+ ]
+ return ext_modules
+
+setup(
+ name="MultiScaleDeformableAttention",
+ version="1.0",
+ author="Weijie Su",
+ url="https://github.com/fundamentalvision/Deformable-DETR",
+ description="PyTorch Wrapper for CUDA Functions of Multi-Scale Deformable Attention",
+ packages=find_packages(exclude=("configs", "tests",)),
+ ext_modules=get_extensions(),
+ cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
+)
diff --git a/modeling_cuda/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp b/modeling_cuda/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..48757e2b0156b2c1513b615d2a17e5aee5172ae7
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp
@@ -0,0 +1,46 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include <vector>
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+
+
+at::Tensor
+ms_deform_attn_cpu_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step)
+{
+ AT_ERROR("Not implemented on CPU");
+}
+
+std::vector<at::Tensor>
+ms_deform_attn_cpu_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step)
+{
+ AT_ERROR("Not implemented on CPU");
+}
+
diff --git a/modeling_cuda/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h b/modeling_cuda/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h
new file mode 100644
index 0000000000000000000000000000000000000000..51bb27e9ee828f967e8aa854c2d55574040c6d7e
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h
@@ -0,0 +1,38 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor
+ms_deform_attn_cpu_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step);
+
+std::vector<at::Tensor>
+ms_deform_attn_cpu_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step);
+
+
diff --git a/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.cu b/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.cu
new file mode 100644
index 0000000000000000000000000000000000000000..0c465dab3d636dfd6a44523c63f148b6e15084d9
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.cu
@@ -0,0 +1,158 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include <vector>
+#include "cuda/ms_deform_im2col_cuda.cuh"
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAContext.h>
+#include <cuda.h>
+#include <cuda_runtime.h>
+
+
+at::Tensor ms_deform_attn_cuda_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step)
+{
+ AT_ASSERTM(value.is_contiguous(), "value tensor has to be contiguous");
+ AT_ASSERTM(spatial_shapes.is_contiguous(), "spatial_shapes tensor has to be contiguous");
+ AT_ASSERTM(level_start_index.is_contiguous(), "level_start_index tensor has to be contiguous");
+ AT_ASSERTM(sampling_loc.is_contiguous(), "sampling_loc tensor has to be contiguous");
+ AT_ASSERTM(attn_weight.is_contiguous(), "attn_weight tensor has to be contiguous");
+
+ AT_ASSERTM(value.type().is_cuda(), "value must be a CUDA tensor");
+ AT_ASSERTM(spatial_shapes.type().is_cuda(), "spatial_shapes must be a CUDA tensor");
+ AT_ASSERTM(level_start_index.type().is_cuda(), "level_start_index must be a CUDA tensor");
+ AT_ASSERTM(sampling_loc.type().is_cuda(), "sampling_loc must be a CUDA tensor");
+ AT_ASSERTM(attn_weight.type().is_cuda(), "attn_weight must be a CUDA tensor");
+
+ const int batch = value.size(0);
+ const int spatial_size = value.size(1);
+ const int num_heads = value.size(2);
+ const int channels = value.size(3);
+
+ const int num_levels = spatial_shapes.size(0);
+
+ const int num_query = sampling_loc.size(1);
+ const int num_point = sampling_loc.size(4);
+
+ const int im2col_step_ = std::min(batch, im2col_step);
+
+ AT_ASSERTM(batch % im2col_step_ == 0, "batch(%d) must divide im2col_step(%d)", batch, im2col_step_);
+
+ auto output = at::zeros({batch, num_query, num_heads, channels}, value.options());
+
+ const int batch_n = im2col_step_;
+ auto output_n = output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});
+ auto per_value_size = spatial_size * num_heads * channels;
+ auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;
+ auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;
+ for (int n = 0; n < batch/im2col_step_; ++n)
+ {
+ auto columns = output_n.select(0, n);
+ AT_DISPATCH_FLOATING_TYPES(value.type(), "ms_deform_attn_forward_cuda", ([&] {
+ ms_deformable_im2col_cuda(at::cuda::getCurrentCUDAStream(),
+ value.data<scalar_t>() + n * im2col_step_ * per_value_size,
+ spatial_shapes.data<int64_t>(),
+ level_start_index.data<int64_t>(),
+ sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
+ attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,
+ batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,
+ columns.data<scalar_t>());
+
+ }));
+ }
+
+ output = output.view({batch, num_query, num_heads*channels});
+
+ return output;
+}
+
+
+std::vector<at::Tensor> ms_deform_attn_cuda_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step)
+{
+
+ AT_ASSERTM(value.is_contiguous(), "value tensor has to be contiguous");
+ AT_ASSERTM(spatial_shapes.is_contiguous(), "spatial_shapes tensor has to be contiguous");
+ AT_ASSERTM(level_start_index.is_contiguous(), "level_start_index tensor has to be contiguous");
+ AT_ASSERTM(sampling_loc.is_contiguous(), "sampling_loc tensor has to be contiguous");
+ AT_ASSERTM(attn_weight.is_contiguous(), "attn_weight tensor has to be contiguous");
+ AT_ASSERTM(grad_output.is_contiguous(), "grad_output tensor has to be contiguous");
+
+ AT_ASSERTM(value.type().is_cuda(), "value must be a CUDA tensor");
+ AT_ASSERTM(spatial_shapes.type().is_cuda(), "spatial_shapes must be a CUDA tensor");
+ AT_ASSERTM(level_start_index.type().is_cuda(), "level_start_index must be a CUDA tensor");
+ AT_ASSERTM(sampling_loc.type().is_cuda(), "sampling_loc must be a CUDA tensor");
+ AT_ASSERTM(attn_weight.type().is_cuda(), "attn_weight must be a CUDA tensor");
+ AT_ASSERTM(grad_output.type().is_cuda(), "grad_output must be a CUDA tensor");
+
+ const int batch = value.size(0);
+ const int spatial_size = value.size(1);
+ const int num_heads = value.size(2);
+ const int channels = value.size(3);
+
+ const int num_levels = spatial_shapes.size(0);
+
+ const int num_query = sampling_loc.size(1);
+ const int num_point = sampling_loc.size(4);
+
+ const int im2col_step_ = std::min(batch, im2col_step);
+
+ AT_ASSERTM(batch % im2col_step_ == 0, "batch(%d) must divide im2col_step(%d)", batch, im2col_step_);
+
+ auto grad_value = at::zeros_like(value);
+ auto grad_sampling_loc = at::zeros_like(sampling_loc);
+ auto grad_attn_weight = at::zeros_like(attn_weight);
+
+ const int batch_n = im2col_step_;
+ auto per_value_size = spatial_size * num_heads * channels;
+ auto per_sample_loc_size = num_query * num_heads * num_levels * num_point * 2;
+ auto per_attn_weight_size = num_query * num_heads * num_levels * num_point;
+ auto grad_output_n = grad_output.view({batch/im2col_step_, batch_n, num_query, num_heads, channels});
+
+ for (int n = 0; n < batch/im2col_step_; ++n)
+ {
+ auto grad_output_g = grad_output_n.select(0, n);
+ AT_DISPATCH_FLOATING_TYPES(value.type(), "ms_deform_attn_backward_cuda", ([&] {
+ ms_deformable_col2im_cuda(at::cuda::getCurrentCUDAStream(),
+ grad_output_g.data<scalar_t>(),
+ value.data<scalar_t>() + n * im2col_step_ * per_value_size,
+ spatial_shapes.data<int64_t>(),
+ level_start_index.data<int64_t>(),
+ sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
+ attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size,
+ batch_n, spatial_size, num_heads, channels, num_levels, num_query, num_point,
+ grad_value.data<scalar_t>() + n * im2col_step_ * per_value_size,
+ grad_sampling_loc.data<scalar_t>() + n * im2col_step_ * per_sample_loc_size,
+ grad_attn_weight.data<scalar_t>() + n * im2col_step_ * per_attn_weight_size);
+
+ }));
+ }
+
+ return {
+ grad_value, grad_sampling_loc, grad_attn_weight
+ };
+}
\ No newline at end of file
diff --git a/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h b/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h
new file mode 100644
index 0000000000000000000000000000000000000000..4f0658e8668a11f0e7d71deff9adac71884f2e87
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_attn_cuda.h
@@ -0,0 +1,35 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#pragma once
+#include <torch/extension.h>
+
+at::Tensor ms_deform_attn_cuda_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step);
+
+std::vector<at::Tensor> ms_deform_attn_cuda_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step);
+
diff --git a/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_im2col_cuda.cuh b/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_im2col_cuda.cuh
new file mode 100644
index 0000000000000000000000000000000000000000..c04e0d4ab97d25c1756fcd8d08dd1e5a6d280b7c
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/cuda/ms_deform_im2col_cuda.cuh
@@ -0,0 +1,1332 @@
+/*!
+**************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************
+* Modified from DCN (https://github.com/msracver/Deformable-ConvNets)
+* Copyright (c) 2018 Microsoft
+**************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include <cstdio>
+#include <algorithm>
+#include <cstring>
+
+#include <ATen/ATen.h>
+#include <ATen/cuda/CUDAApplyUtils.cuh>
+
+#include <THC/THCAtomics.cuh>
+
+#define CUDA_KERNEL_LOOP(i, n) \
+ for (int i = blockIdx.x * blockDim.x + threadIdx.x; \
+ i < (n); \
+ i += blockDim.x * gridDim.x)
+
+const int CUDA_NUM_THREADS = 1024;
+inline int GET_BLOCKS(const int N, const int num_threads)
+{
+ return (N + num_threads - 1) / num_threads;
+}
+
+
+template <typename scalar_t>
+__device__ scalar_t ms_deform_attn_im2col_bilinear(const scalar_t* &bottom_data,
+ const int &height, const int &width, const int &nheads, const int &channels,
+ const scalar_t &h, const scalar_t &w, const int &m, const int &c)
+{
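+ // Bilinear interpolation of bottom_data at fractional location (h, w) for head m, channel c; samples outside the feature map contribute zero.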
+ const int h_low = floor(h);
+ const int w_low = floor(w);
+ const int h_high = h_low + 1;
+ const int w_high = w_low + 1;
+
+ const scalar_t lh = h - h_low;
+ const scalar_t lw = w - w_low;
+ const scalar_t hh = 1 - lh, hw = 1 - lw;
+
+ const int w_stride = nheads * channels;
+ const int h_stride = width * w_stride;
+ const int h_low_ptr_offset = h_low * h_stride;
+ const int h_high_ptr_offset = h_low_ptr_offset + h_stride;
+ const int w_low_ptr_offset = w_low * w_stride;
+ const int w_high_ptr_offset = w_low_ptr_offset + w_stride;
+ const int base_ptr = m * channels + c;
+
+ scalar_t v1 = 0;
+ if (h_low >= 0 && w_low >= 0)
+ {
+ const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;
+ v1 = bottom_data[ptr1];
+ }
+ scalar_t v2 = 0;
+ if (h_low >= 0 && w_high <= width - 1)
+ {
+ const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;
+ v2 = bottom_data[ptr2];
+ }
+ scalar_t v3 = 0;
+ if (h_high <= height - 1 && w_low >= 0)
+ {
+ const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;
+ v3 = bottom_data[ptr3];
+ }
+ scalar_t v4 = 0;
+ if (h_high <= height - 1 && w_high <= width - 1)
+ {
+ const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;
+ v4 = bottom_data[ptr4];
+ }
+
+ const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
+
+ const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
+ return val;
+}
+
+
+template <typename scalar_t>
+__device__ void ms_deform_attn_col2im_bilinear(const scalar_t* &bottom_data,
+ const int &height, const int &width, const int &nheads, const int &channels,
+ const scalar_t &h, const scalar_t &w, const int &m, const int &c,
+ const scalar_t &top_grad,
+ const scalar_t &attn_weight,
+ scalar_t* &grad_value,
+ scalar_t* grad_sampling_loc,
+ scalar_t* grad_attn_weight)
+{
+ const int h_low = floor(h);
+ const int w_low = floor(w);
+ const int h_high = h_low + 1;
+ const int w_high = w_low + 1;
+
+ const scalar_t lh = h - h_low;
+ const scalar_t lw = w - w_low;
+ const scalar_t hh = 1 - lh, hw = 1 - lw;
+
+ const int w_stride = nheads * channels;
+ const int h_stride = width * w_stride;
+ const int h_low_ptr_offset = h_low * h_stride;
+ const int h_high_ptr_offset = h_low_ptr_offset + h_stride;
+ const int w_low_ptr_offset = w_low * w_stride;
+ const int w_high_ptr_offset = w_low_ptr_offset + w_stride;
+ const int base_ptr = m * channels + c;
+
+ const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
+ const scalar_t top_grad_value = top_grad * attn_weight;
+ scalar_t grad_h_weight = 0, grad_w_weight = 0;
+
+ scalar_t v1 = 0;
+ if (h_low >= 0 && w_low >= 0)
+ {
+ const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;
+ v1 = bottom_data[ptr1];
+ grad_h_weight -= hw * v1;
+ grad_w_weight -= hh * v1;
+ atomicAdd(grad_value+ptr1, w1*top_grad_value);
+ }
+ scalar_t v2 = 0;
+ if (h_low >= 0 && w_high <= width - 1)
+ {
+ const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;
+ v2 = bottom_data[ptr2];
+ grad_h_weight -= lw * v2;
+ grad_w_weight += hh * v2;
+ atomicAdd(grad_value+ptr2, w2*top_grad_value);
+ }
+ scalar_t v3 = 0;
+ if (h_high <= height - 1 && w_low >= 0)
+ {
+ const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;
+ v3 = bottom_data[ptr3];
+ grad_h_weight += hw * v3;
+ grad_w_weight -= lh * v3;
+ atomicAdd(grad_value+ptr3, w3*top_grad_value);
+ }
+ scalar_t v4 = 0;
+ if (h_high <= height - 1 && w_high <= width - 1)
+ {
+ const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;
+ v4 = bottom_data[ptr4];
+ grad_h_weight += lw * v4;
+ grad_w_weight += lh * v4;
+ atomicAdd(grad_value+ptr4, w4*top_grad_value);
+ }
+
+ const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
+ *grad_attn_weight = top_grad * val;
+ *grad_sampling_loc = width * grad_w_weight * top_grad_value;
+ *(grad_sampling_loc + 1) = height * grad_h_weight * top_grad_value;
+}
+
+
+template <typename scalar_t>
+__device__ void ms_deform_attn_col2im_bilinear_gm(const scalar_t* &bottom_data,
+ const int &height, const int &width, const int &nheads, const int &channels,
+ const scalar_t &h, const scalar_t &w, const int &m, const int &c,
+ const scalar_t &top_grad,
+ const scalar_t &attn_weight,
+ scalar_t* &grad_value,
+ scalar_t* grad_sampling_loc,
+ scalar_t* grad_attn_weight)
+{
+ const int h_low = floor(h);
+ const int w_low = floor(w);
+ const int h_high = h_low + 1;
+ const int w_high = w_low + 1;
+
+ const scalar_t lh = h - h_low;
+ const scalar_t lw = w - w_low;
+ const scalar_t hh = 1 - lh, hw = 1 - lw;
+
+ const int w_stride = nheads * channels;
+ const int h_stride = width * w_stride;
+ const int h_low_ptr_offset = h_low * h_stride;
+ const int h_high_ptr_offset = h_low_ptr_offset + h_stride;
+ const int w_low_ptr_offset = w_low * w_stride;
+ const int w_high_ptr_offset = w_low_ptr_offset + w_stride;
+ const int base_ptr = m * channels + c;
+
+ const scalar_t w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw;
+ const scalar_t top_grad_value = top_grad * attn_weight;
+ scalar_t grad_h_weight = 0, grad_w_weight = 0;
+
+ scalar_t v1 = 0;
+ if (h_low >= 0 && w_low >= 0)
+ {
+ const int ptr1 = h_low_ptr_offset + w_low_ptr_offset + base_ptr;
+ v1 = bottom_data[ptr1];
+ grad_h_weight -= hw * v1;
+ grad_w_weight -= hh * v1;
+ atomicAdd(grad_value+ptr1, w1*top_grad_value);
+ }
+ scalar_t v2 = 0;
+ if (h_low >= 0 && w_high <= width - 1)
+ {
+ const int ptr2 = h_low_ptr_offset + w_high_ptr_offset + base_ptr;
+ v2 = bottom_data[ptr2];
+ grad_h_weight -= lw * v2;
+ grad_w_weight += hh * v2;
+ atomicAdd(grad_value+ptr2, w2*top_grad_value);
+ }
+ scalar_t v3 = 0;
+ if (h_high <= height - 1 && w_low >= 0)
+ {
+ const int ptr3 = h_high_ptr_offset + w_low_ptr_offset + base_ptr;
+ v3 = bottom_data[ptr3];
+ grad_h_weight += hw * v3;
+ grad_w_weight -= lh * v3;
+ atomicAdd(grad_value+ptr3, w3*top_grad_value);
+ }
+ scalar_t v4 = 0;
+ if (h_high <= height - 1 && w_high <= width - 1)
+ {
+ const int ptr4 = h_high_ptr_offset + w_high_ptr_offset + base_ptr;
+ v4 = bottom_data[ptr4];
+ grad_h_weight += lw * v4;
+ grad_w_weight += lh * v4;
+ atomicAdd(grad_value+ptr4, w4*top_grad_value);
+ }
+
+ const scalar_t val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4);
+ atomicAdd(grad_attn_weight, top_grad * val);
+ atomicAdd(grad_sampling_loc, width * grad_w_weight * top_grad_value);
+ atomicAdd(grad_sampling_loc + 1, height * grad_h_weight * top_grad_value);
+}
+
+
+template <typename scalar_t>
+__global__ void ms_deformable_im2col_gpu_kernel(const int n,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *data_col)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
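+ // One thread per output element: decode the flat index into (batch b_col, query q_col, head m_col, channel c_col), then accumulate over all levels and sampling points.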
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ scalar_t *data_col_ptr = data_col + index;
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+ scalar_t col = 0;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const scalar_t *data_value_ptr = data_value + (data_value_ptr_init_offset + level_start_id * qid_stride);
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ col += ms_deform_attn_im2col_bilinear(data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col) * weight;
+ }
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ }
+ }
+ *data_col_ptr = col;
+ }
+}
+
+template <typename scalar_t, unsigned int blockSize>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
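+ // v1: gradients w.r.t. sampling locations and attention weights are accumulated in shared memory and summed serially by thread 0 after each sampling point.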
+ __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2];
+ __shared__ scalar_t cache_grad_attn_weight[blockSize];
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+ if (tid == 0)
+ {
+ scalar_t _grad_w=cache_grad_sampling_loc[0], _grad_h=cache_grad_sampling_loc[1], _grad_a=cache_grad_attn_weight[0];
+ int sid=2;
+ for (unsigned int tid = 1; tid < blockSize; ++tid)
+ {
+ _grad_w += cache_grad_sampling_loc[sid];
+ _grad_h += cache_grad_sampling_loc[sid + 1];
+ _grad_a += cache_grad_attn_weight[tid];
+ sid += 2;
+ }
+
+
+ *grad_sampling_loc = _grad_w;
+ *(grad_sampling_loc + 1) = _grad_h;
+ *grad_attn_weight = _grad_a;
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
+template <typename scalar_t, unsigned int blockSize>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
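+ // v2: same as v1, but the shared-memory partial gradients are combined with a parallel tree reduction instead of a serial sum by thread 0.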
+ __shared__ scalar_t cache_grad_sampling_loc[blockSize * 2];
+ __shared__ scalar_t cache_grad_attn_weight[blockSize];
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+
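+        // tree reduction in shared memory: each pass halves the number of active threads
+        // (blockSize is a power of two, so no remainder handling is needed)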
+ for (unsigned int s=blockSize/2; s>0; s>>=1)
+ {
+ if (tid < s) {
+ const unsigned int xid1 = tid << 1;
+ const unsigned int xid2 = (tid + s) << 1;
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];
+ }
+ __syncthreads();
+ }
+
+ if (tid == 0)
+ {
+ *grad_sampling_loc = cache_grad_sampling_loc[0];
+ *(grad_sampling_loc + 1) = cache_grad_sampling_loc[1];
+ *grad_attn_weight = cache_grad_attn_weight[0];
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v1(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ extern __shared__ int _s[];
+ scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;
+ scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+ if (tid == 0)
+ {
+ scalar_t _grad_w=cache_grad_sampling_loc[0], _grad_h=cache_grad_sampling_loc[1], _grad_a=cache_grad_attn_weight[0];
+ int sid=2;
+ for (unsigned int tid = 1; tid < blockDim.x; ++tid)
+ {
+ _grad_w += cache_grad_sampling_loc[sid];
+ _grad_h += cache_grad_sampling_loc[sid + 1];
+ _grad_a += cache_grad_attn_weight[tid];
+ sid += 2;
+ }
+
+
+ *grad_sampling_loc = _grad_w;
+ *(grad_sampling_loc + 1) = _grad_h;
+ *grad_attn_weight = _grad_a;
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ extern __shared__ int _s[];
+ scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;
+ scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+
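+        // tree reduction over dynamically allocated shared memory; the (tid + (s << 1) < spre)
+        // branch folds in the trailing element so non-power-of-two block widths reduce correctly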
+ for (unsigned int s=blockDim.x/2, spre=blockDim.x; s>0; s>>=1, spre>>=1)
+ {
+ if (tid < s) {
+ const unsigned int xid1 = tid << 1;
+ const unsigned int xid2 = (tid + s) << 1;
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];
+ if (tid + (s << 1) < spre)
+ {
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + (s << 1)];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2 + (s << 1)];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1 + (s << 1)];
+ }
+ }
+ __syncthreads();
+ }
+
+ if (tid == 0)
+ {
+ *grad_sampling_loc = cache_grad_sampling_loc[0];
+ *(grad_sampling_loc + 1) = cache_grad_sampling_loc[1];
+ *grad_attn_weight = cache_grad_attn_weight[0];
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ extern __shared__ int _s[];
+ scalar_t* cache_grad_sampling_loc = (scalar_t*)_s;
+ scalar_t* cache_grad_attn_weight = cache_grad_sampling_loc + 2 * blockDim.x;
+ unsigned int tid = threadIdx.x;
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
+ *(cache_grad_sampling_loc+(threadIdx.x << 1)) = 0;
+ *(cache_grad_sampling_loc+((threadIdx.x << 1) + 1)) = 0;
+ *(cache_grad_attn_weight+threadIdx.x)=0;
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ cache_grad_sampling_loc+(threadIdx.x << 1), cache_grad_attn_weight+threadIdx.x);
+ }
+
+ __syncthreads();
+
+ for (unsigned int s=blockDim.x/2, spre=blockDim.x; s>0; s>>=1, spre>>=1)
+ {
+ if (tid < s) {
+ const unsigned int xid1 = tid << 1;
+ const unsigned int xid2 = (tid + s) << 1;
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + s];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1];
+ if (tid + (s << 1) < spre)
+ {
+ cache_grad_attn_weight[tid] += cache_grad_attn_weight[tid + (s << 1)];
+ cache_grad_sampling_loc[xid1] += cache_grad_sampling_loc[xid2 + (s << 1)];
+ cache_grad_sampling_loc[xid1 + 1] += cache_grad_sampling_loc[xid2 + 1 + (s << 1)];
+ }
+ }
+ __syncthreads();
+ }
+
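+        // channels > 1024 means one (query, head) pair spans several thread blocks, so each
+        // block's partial sums are accumulated into global memory atomically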
+ if (tid == 0)
+ {
+ atomicAdd(grad_sampling_loc, cache_grad_sampling_loc[0]);
+ atomicAdd(grad_sampling_loc + 1, cache_grad_sampling_loc[1]);
+ atomicAdd(grad_attn_weight, cache_grad_attn_weight[0]);
+ }
+ __syncthreads();
+
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
+template <typename scalar_t>
+__global__ void ms_deformable_col2im_gpu_kernel_gm(const int n,
+ const scalar_t *grad_col,
+ const scalar_t *data_value,
+ const int64_t *data_spatial_shapes,
+ const int64_t *data_level_start_index,
+ const scalar_t *data_sampling_loc,
+ const scalar_t *data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t *grad_value,
+ scalar_t *grad_sampling_loc,
+ scalar_t *grad_attn_weight)
+{
+ CUDA_KERNEL_LOOP(index, n)
+ {
+ int _temp = index;
+ const int c_col = _temp % channels;
+ _temp /= channels;
+ const int sampling_index = _temp;
+ const int m_col = _temp % num_heads;
+ _temp /= num_heads;
+ const int q_col = _temp % num_query;
+ _temp /= num_query;
+ const int b_col = _temp;
+
+ const scalar_t top_grad = grad_col[index];
+
+ int data_weight_ptr = sampling_index * num_levels * num_point;
+ int data_loc_w_ptr = data_weight_ptr << 1;
+ const int grad_sampling_ptr = data_weight_ptr;
+ grad_sampling_loc += grad_sampling_ptr << 1;
+ grad_attn_weight += grad_sampling_ptr;
+ const int grad_weight_stride = 1;
+ const int grad_loc_stride = 2;
+ const int qid_stride = num_heads * channels;
+ const int data_value_ptr_init_offset = b_col * spatial_size * qid_stride;
+
+ for (int l_col=0; l_col < num_levels; ++l_col)
+ {
+ const int level_start_id = data_level_start_index[l_col];
+ const int spatial_h_ptr = l_col << 1;
+ const int spatial_h = data_spatial_shapes[spatial_h_ptr];
+ const int spatial_w = data_spatial_shapes[spatial_h_ptr + 1];
+ const int value_ptr_offset = data_value_ptr_init_offset + level_start_id * qid_stride;
+ const scalar_t *data_value_ptr = data_value + value_ptr_offset;
+ scalar_t *grad_value_ptr = grad_value + value_ptr_offset;
+
+ for (int p_col=0; p_col < num_point; ++p_col)
+ {
+ const scalar_t loc_w = data_sampling_loc[data_loc_w_ptr];
+ const scalar_t loc_h = data_sampling_loc[data_loc_w_ptr + 1];
+ const scalar_t weight = data_attn_weight[data_weight_ptr];
+
+ const scalar_t h_im = loc_h * spatial_h - 0.5;
+ const scalar_t w_im = loc_w * spatial_w - 0.5;
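+        // fallback path without a shared-memory reduction: ms_deform_attn_col2im_bilinear_gm
+        // accumulates the sampling-location and attention-weight gradients directly in global memory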
+ if (h_im > -1 && w_im > -1 && h_im < spatial_h && w_im < spatial_w)
+ {
+ ms_deform_attn_col2im_bilinear_gm(
+ data_value_ptr, spatial_h, spatial_w, num_heads, channels, h_im, w_im, m_col, c_col,
+ top_grad, weight, grad_value_ptr,
+ grad_sampling_loc, grad_attn_weight);
+ }
+ data_weight_ptr += 1;
+ data_loc_w_ptr += 2;
+ grad_attn_weight += grad_weight_stride;
+ grad_sampling_loc += grad_loc_stride;
+ }
+ }
+ }
+}
+
+
+template <typename scalar_t>
+void ms_deformable_im2col_cuda(cudaStream_t stream,
+ const scalar_t* data_value,
+ const int64_t* data_spatial_shapes,
+ const int64_t* data_level_start_index,
+ const scalar_t* data_sampling_loc,
+ const scalar_t* data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t* data_col)
+{
+ const int num_kernels = batch_size * num_query * num_heads * channels;
+ const int num_actual_kernels = batch_size * num_query * num_heads * channels;
+ const int num_threads = CUDA_NUM_THREADS;
+  ms_deformable_im2col_gpu_kernel<scalar_t>
+      <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels, data_value, data_spatial_shapes, data_level_start_index, data_sampling_loc, data_attn_weight,
+ batch_size, spatial_size, num_heads, channels, num_levels, num_query, num_point, data_col);
+
+ cudaError_t err = cudaGetLastError();
+ if (err != cudaSuccess)
+ {
+ printf("error in ms_deformable_im2col_cuda: %s\n", cudaGetErrorString(err));
+ }
+
+}
+
+template <typename scalar_t>
+void ms_deformable_col2im_cuda(cudaStream_t stream,
+ const scalar_t* grad_col,
+ const scalar_t* data_value,
+ const int64_t * data_spatial_shapes,
+ const int64_t * data_level_start_index,
+ const scalar_t * data_sampling_loc,
+ const scalar_t * data_attn_weight,
+ const int batch_size,
+ const int spatial_size,
+ const int num_heads,
+ const int channels,
+ const int num_levels,
+ const int num_query,
+ const int num_point,
+ scalar_t* grad_value,
+ scalar_t* grad_sampling_loc,
+ scalar_t* grad_attn_weight)
+{
+ const int num_threads = (channels > CUDA_NUM_THREADS)?CUDA_NUM_THREADS:channels;
+ const int num_kernels = batch_size * num_query * num_heads * channels;
+ const int num_actual_kernels = batch_size * num_query * num_heads * channels;
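+  // dispatch: channel counts above 1024 use the multi-block shared-memory kernel when divisible
+  // by 1024 and the plain global-memory kernel otherwise; power-of-two counts up to 1024 use the
+  // kernels templated on block size; all remaining widths fall back to the generic shared-memory reductions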
+ if (channels > 1024)
+ {
+ if ((channels & 1023) == 0)
+ {
+      ms_deformable_col2im_gpu_kernel_shm_reduce_v2_multi_blocks<scalar_t>
+          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, num_threads*3*sizeof(scalar_t), stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ else
+ {
+      ms_deformable_col2im_gpu_kernel_gm<scalar_t>
+          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ }
+ else{
+ switch(channels)
+ {
+ case 1:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 1>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 2:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 2>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 4:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 4>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 8:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 8>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 16:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 16>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 32:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v1<scalar_t, 32>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 64:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 64>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 128:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 128>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 256:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 256>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 512:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 512>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ case 1024:
+        ms_deformable_col2im_gpu_kernel_shm_blocksize_aware_reduce_v2<scalar_t, 1024>
+        <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, 0, stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ break;
+ default:
+ if (channels < 64)
+ {
+          ms_deformable_col2im_gpu_kernel_shm_reduce_v1<scalar_t>
+          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, num_threads*3*sizeof(scalar_t), stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ else
+ {
+          ms_deformable_col2im_gpu_kernel_shm_reduce_v2<scalar_t>
+          <<<GET_BLOCKS(num_actual_kernels, num_threads), num_threads, num_threads*3*sizeof(scalar_t), stream>>>(
+ num_kernels,
+ grad_col,
+ data_value,
+ data_spatial_shapes,
+ data_level_start_index,
+ data_sampling_loc,
+ data_attn_weight,
+ batch_size,
+ spatial_size,
+ num_heads,
+ channels,
+ num_levels,
+ num_query,
+ num_point,
+ grad_value,
+ grad_sampling_loc,
+ grad_attn_weight);
+ }
+ }
+ }
+ cudaError_t err = cudaGetLastError();
+ if (err != cudaSuccess)
+ {
+ printf("error in ms_deformable_col2im_cuda: %s\n", cudaGetErrorString(err));
+ }
+
+}
\ No newline at end of file
diff --git a/modeling_cuda/pixel_decoder/ops/src/ms_deform_attn.h b/modeling_cuda/pixel_decoder/ops/src/ms_deform_attn.h
new file mode 100644
index 0000000000000000000000000000000000000000..2f80a1b294c55b37d13bb3558ff7aeadba3b37de
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/ms_deform_attn.h
@@ -0,0 +1,67 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#pragma once
+
+#include "cpu/ms_deform_attn_cpu.h"
+
+#ifdef WITH_CUDA
+#include "cuda/ms_deform_attn_cuda.h"
+#endif
+
+
+at::Tensor
+ms_deform_attn_forward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const int im2col_step)
+{
+ if (value.type().is_cuda())
+ {
+#ifdef WITH_CUDA
+ return ms_deform_attn_cuda_forward(
+ value, spatial_shapes, level_start_index, sampling_loc, attn_weight, im2col_step);
+#else
+ AT_ERROR("Not compiled with GPU support");
+#endif
+ }
+ AT_ERROR("Not implemented on the CPU");
+}
+
+std::vector<at::Tensor>
+ms_deform_attn_backward(
+ const at::Tensor &value,
+ const at::Tensor &spatial_shapes,
+ const at::Tensor &level_start_index,
+ const at::Tensor &sampling_loc,
+ const at::Tensor &attn_weight,
+ const at::Tensor &grad_output,
+ const int im2col_step)
+{
+ if (value.type().is_cuda())
+ {
+#ifdef WITH_CUDA
+ return ms_deform_attn_cuda_backward(
+ value, spatial_shapes, level_start_index, sampling_loc, attn_weight, grad_output, im2col_step);
+#else
+ AT_ERROR("Not compiled with GPU support");
+#endif
+ }
+ AT_ERROR("Not implemented on the CPU");
+}
+
diff --git a/modeling_cuda/pixel_decoder/ops/src/vision.cpp b/modeling_cuda/pixel_decoder/ops/src/vision.cpp
new file mode 100644
index 0000000000000000000000000000000000000000..4a08821e0121a77556aa7a263ec8ebfa928b13b6
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/src/vision.cpp
@@ -0,0 +1,21 @@
+/*!
+**************************************************************************************************
+* Deformable DETR
+* Copyright (c) 2020 SenseTime. All Rights Reserved.
+* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+**************************************************************************************************
+* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+**************************************************************************************************
+*/
+
+/*!
+* Copyright (c) Facebook, Inc. and its affiliates.
+* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+*/
+
+#include "ms_deform_attn.h"
+
+PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
+ m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward");
+ m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward");
+}
diff --git a/modeling_cuda/pixel_decoder/ops/test.py b/modeling_cuda/pixel_decoder/ops/test.py
new file mode 100644
index 0000000000000000000000000000000000000000..6e1b545459f6fd3235767e721eb5a1090ae14bef
--- /dev/null
+++ b/modeling_cuda/pixel_decoder/ops/test.py
@@ -0,0 +1,92 @@
+# ------------------------------------------------------------------------------------------------
+# Deformable DETR
+# Copyright (c) 2020 SenseTime. All Rights Reserved.
+# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
+# ------------------------------------------------------------------------------------------------
+# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
+# ------------------------------------------------------------------------------------------------
+
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
+
+from __future__ import absolute_import
+from __future__ import print_function
+from __future__ import division
+
+import time
+import torch
+import torch.nn as nn
+from torch.autograd import gradcheck
+
+from functions.ms_deform_attn_func import MSDeformAttnFunction, ms_deform_attn_core_pytorch
+
+
+N, M, D = 1, 2, 2
+Lq, L, P = 2, 2, 2
+shapes = torch.as_tensor([(6, 4), (3, 2)], dtype=torch.long).cuda()
+level_start_index = torch.cat((shapes.new_zeros((1, )), shapes.prod(1).cumsum(0)[:-1]))
+S = sum([(H*W).item() for H, W in shapes])
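+# level_start_index[l] is the offset of feature level l inside the flattened spatial dimension of length S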
+
+
+torch.manual_seed(3)
+
+
+@torch.no_grad()
+def check_forward_equal_with_pytorch_double():
+ value = torch.rand(N, S, M, D).cuda() * 0.01
+ sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
+ attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5
+ attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)
+ im2col_step = 2
+ output_pytorch = ms_deform_attn_core_pytorch(value.double(), shapes, sampling_locations.double(), attention_weights.double()).detach().cpu()
+ output_cuda = MSDeformAttnFunction.apply(value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step).detach().cpu()
+ fwdok = torch.allclose(output_cuda, output_pytorch)
+ max_abs_err = (output_cuda - output_pytorch).abs().max()
+ max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()
+
+ print(f'* {fwdok} check_forward_equal_with_pytorch_double: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')
+
+
+@torch.no_grad()
+def check_forward_equal_with_pytorch_float():
+ value = torch.rand(N, S, M, D).cuda() * 0.01
+ sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
+ attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5
+ attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)
+ im2col_step = 2
+ output_pytorch = ms_deform_attn_core_pytorch(value, shapes, sampling_locations, attention_weights).detach().cpu()
+ output_cuda = MSDeformAttnFunction.apply(value, shapes, level_start_index, sampling_locations, attention_weights, im2col_step).detach().cpu()
+ fwdok = torch.allclose(output_cuda, output_pytorch, rtol=1e-2, atol=1e-3)
+ max_abs_err = (output_cuda - output_pytorch).abs().max()
+ max_rel_err = ((output_cuda - output_pytorch).abs() / output_pytorch.abs()).max()
+
+ print(f'* {fwdok} check_forward_equal_with_pytorch_float: max_abs_err {max_abs_err:.2e} max_rel_err {max_rel_err:.2e}')
+
+
+def check_gradient_numerical(channels=4, grad_value=True, grad_sampling_loc=True, grad_attn_weight=True):
+
+ value = torch.rand(N, S, M, channels).cuda() * 0.01
+ sampling_locations = torch.rand(N, Lq, M, L, P, 2).cuda()
+ attention_weights = torch.rand(N, Lq, M, L, P).cuda() + 1e-5
+ attention_weights /= attention_weights.sum(-1, keepdim=True).sum(-2, keepdim=True)
+ im2col_step = 2
+ func = MSDeformAttnFunction.apply
+
+ value.requires_grad = grad_value
+ sampling_locations.requires_grad = grad_sampling_loc
+ attention_weights.requires_grad = grad_attn_weight
+
+ gradok = gradcheck(func, (value.double(), shapes, level_start_index, sampling_locations.double(), attention_weights.double(), im2col_step))
+
+ print(f'* {gradok} check_gradient_numerical(D={channels})')
+
+
+if __name__ == '__main__':
+ check_forward_equal_with_pytorch_double()
+ check_forward_equal_with_pytorch_float()
+
+ for channels in [30, 32, 64, 71, 1025, 2048, 3096]:
+ check_gradient_numerical(channels, True, True, True)
+
+
+
diff --git a/modeling_cuda/transformer_decoder/__init__.py b/modeling_cuda/transformer_decoder/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..ddcf38e78f3bbb2380b0a246000bcb5e5b385619
--- /dev/null
+++ b/modeling_cuda/transformer_decoder/__init__.py
@@ -0,0 +1,3 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+from .maskformer_transformer_decoder import StandardTransformerDecoder
+from .mask2former_transformer_decoder import MultiScaleMaskedTransformerDecoder
diff --git a/modeling_cuda/transformer_decoder/mask2former_transformer_decoder.py b/modeling_cuda/transformer_decoder/mask2former_transformer_decoder.py
new file mode 100644
index 0000000000000000000000000000000000000000..f7cf30ff98dea67aa734b0b242826c450ccf7fda
--- /dev/null
+++ b/modeling_cuda/transformer_decoder/mask2former_transformer_decoder.py
@@ -0,0 +1,387 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py
+import fvcore.nn.weight_init as weight_init
+from typing import Optional
+import torch
+from torch import nn, Tensor
+from torch.nn import functional as F
+
+from .position_encoding import PositionEmbeddingSine
+
+class SelfAttentionLayer(nn.Module):
+
+ def __init__(self, d_model, nhead, dropout=0.0,
+ activation="relu", normalize_before=False):
+ super().__init__()
+ self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+
+ self.norm = nn.LayerNorm(d_model)
+ self.dropout = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(self, tgt,
+ tgt_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ q = k = self.with_pos_embed(tgt, query_pos)
+ tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,
+ key_padding_mask=tgt_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+ tgt = self.norm(tgt)
+
+ return tgt
+
+ def forward_pre(self, tgt,
+ tgt_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ tgt2 = self.norm(tgt)
+ q = k = self.with_pos_embed(tgt2, query_pos)
+ tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
+ key_padding_mask=tgt_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+
+ return tgt
+
+ def forward(self, tgt,
+ tgt_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ if self.normalize_before:
+ return self.forward_pre(tgt, tgt_mask,
+ tgt_key_padding_mask, query_pos)
+ return self.forward_post(tgt, tgt_mask,
+ tgt_key_padding_mask, query_pos)
+
+
+class CrossAttentionLayer(nn.Module):
+
+ def __init__(self, d_model, nhead, dropout=0.0,
+ activation="relu", normalize_before=False):
+ super().__init__()
+ self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+
+ self.norm = nn.LayerNorm(d_model)
+ self.dropout = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(self, tgt, memory,
+ memory_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory, attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+ tgt = self.norm(tgt)
+
+ return tgt
+
+ def forward_pre(self, tgt, memory,
+ memory_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ tgt2 = self.norm(tgt)
+ tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory, attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask)[0]
+ tgt = tgt + self.dropout(tgt2)
+
+ return tgt
+
+ def forward(self, tgt, memory,
+ memory_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None):
+ if self.normalize_before:
+ return self.forward_pre(tgt, memory, memory_mask,
+ memory_key_padding_mask, pos, query_pos)
+ return self.forward_post(tgt, memory, memory_mask,
+ memory_key_padding_mask, pos, query_pos)
+
+
+class FFNLayer(nn.Module):
+
+ def __init__(self, d_model, dim_feedforward=2048, dropout=0.0,
+ activation="relu", normalize_before=False):
+ super().__init__()
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm = nn.LayerNorm(d_model)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ self._reset_parameters()
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(self, tgt):
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
+ tgt = tgt + self.dropout(tgt2)
+ tgt = self.norm(tgt)
+ return tgt
+
+ def forward_pre(self, tgt):
+ tgt2 = self.norm(tgt)
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
+ tgt = tgt + self.dropout(tgt2)
+ return tgt
+
+ def forward(self, tgt):
+ if self.normalize_before:
+ return self.forward_pre(tgt)
+ return self.forward_post(tgt)
+
+
+def _get_activation_fn(activation):
+ """Return an activation function given a string"""
+ if activation == "relu":
+ return F.relu
+ if activation == "gelu":
+ return F.gelu
+ if activation == "glu":
+ return F.glu
+    raise RuntimeError(F"activation should be relu/gelu/glu, not {activation}.")
+
+
+class MLP(nn.Module):
+ """ Very simple multi-layer perceptron (also called FFN)"""
+
+ def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
+ super().__init__()
+ self.num_layers = num_layers
+ h = [hidden_dim] * (num_layers - 1)
+ self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
+
+ def forward(self, x):
+ for i, layer in enumerate(self.layers):
+ x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
+ return x
+
+
+class MultiScaleMaskedTransformerDecoder(nn.Module):
+ def __init__(
+ self,
+ in_channels,
+ num_classes,
+ mask_classification=True,
+ hidden_dim=256,
+ num_queries=100,
+ nheads=8,
+ dim_feedforward=2048,
+ dec_layers=10,
+ pre_norm=False,
+ mask_dim=256,
+ enforce_input_project=False
+ ):
+ super().__init__()
+
+ assert mask_classification, "Only support mask classification model"
+ self.mask_classification = mask_classification
+
+ # positional encoding
+ N_steps = hidden_dim // 2
+ self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
+
+ # define Transformer decoder here
+ self.num_heads = nheads
+ self.num_layers = dec_layers
+ self.transformer_self_attention_layers = nn.ModuleList()
+ self.transformer_cross_attention_layers = nn.ModuleList()
+ self.transformer_ffn_layers = nn.ModuleList()
+
+ for _ in range(self.num_layers):
+ self.transformer_self_attention_layers.append(
+ SelfAttentionLayer(
+ d_model=hidden_dim,
+ nhead=nheads,
+ dropout=0.0,
+ normalize_before=pre_norm,
+ )
+ )
+
+ self.transformer_cross_attention_layers.append(
+ CrossAttentionLayer(
+ d_model=hidden_dim,
+ nhead=nheads,
+ dropout=0.0,
+ normalize_before=pre_norm,
+ )
+ )
+
+ self.transformer_ffn_layers.append(
+ FFNLayer(
+ d_model=hidden_dim,
+ dim_feedforward=dim_feedforward,
+ dropout=0.0,
+ normalize_before=pre_norm,
+ )
+ )
+
+ self.decoder_norm = nn.LayerNorm(hidden_dim)
+
+ self.num_queries = num_queries
+ # learnable query features
+ self.query_feat = nn.Embedding(num_queries, hidden_dim)
+ # learnable query p.e.
+ self.query_embed = nn.Embedding(num_queries, hidden_dim)
+
+ # level embedding (we always use 3 scales)
+ self.num_feature_levels = 3
+ self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim)
+ self.input_proj = nn.ModuleList()
+ for _ in range(self.num_feature_levels):
+ if in_channels != hidden_dim or enforce_input_project:
+ self.input_proj.append(nn.Conv2d(in_channels, hidden_dim, kernel_size=1))
+ weight_init.c2_xavier_fill(self.input_proj[-1])
+ else:
+ self.input_proj.append(nn.Sequential())
+
+ # output FFNs
+ if self.mask_classification:
+ self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
+ self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
+
+ def forward(self, x, mask_features, mask = None):
+ #print(mask_features.shape, "!!")
+ # x is a list of multi-scale feature
+ assert len(x) == self.num_feature_levels
+ src = []
+ pos = []
+ size_list = []
+
+ # disable mask, it does not affect performance
+ del mask
+
+ for i in range(self.num_feature_levels):
+ size_list.append(x[i].shape[-2:])
+ pos.append(self.pe_layer(x[i], None).flatten(2))
+ src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None])
+
+ # flatten NxCxHxW to HWxNxC
+ pos[-1] = pos[-1].permute(2, 0, 1)
+ src[-1] = src[-1].permute(2, 0, 1)
+
+ _, bs, _ = src[0].shape
+
+ # QxNxC
+ query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1)
+ output = self.query_feat.weight.unsqueeze(1).repeat(1, bs, 1)
+
+ predictions_class = []
+ predictions_mask = []
+
+ # prediction heads on learnable query features
+ outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0])
+ predictions_class.append(outputs_class)
+ predictions_mask.append(outputs_mask)
+
+ for i in range(self.num_layers):
+ level_index = i % self.num_feature_levels
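+            # a query whose mask is True everywhere would attend to nothing; resetting such rows to
+            # False keeps the softmax inside cross-attention from producing NaNs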
+ attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False
+ # attention: cross-attention first
+ output = self.transformer_cross_attention_layers[i](
+ output, src[level_index],
+ memory_mask=attn_mask,
+ memory_key_padding_mask=None, # here we do not apply masking on padded region
+ pos=pos[level_index], query_pos=query_embed
+ )
+
+ output = self.transformer_self_attention_layers[i](
+ output, tgt_mask=None,
+ tgt_key_padding_mask=None,
+ query_pos=query_embed
+ )
+
+ # FFN
+ output = self.transformer_ffn_layers[i](
+ output
+ )
+ #print(output.shape, "??")
+
+ outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels])
+ predictions_class.append(outputs_class)
+ predictions_mask.append(outputs_mask)
+
+ assert len(predictions_class) == self.num_layers + 1
+
+ out = {
+ 'pred_logits': predictions_class[-1],
+ 'pred_masks': predictions_mask[-1],
+ 'aux_outputs': self._set_aux_loss(
+ predictions_class if self.mask_classification else None, predictions_mask
+ )
+ }
+ return out
+
+ def forward_prediction_heads(self, output, mask_features, attn_mask_target_size):
+ decoder_output = self.decoder_norm(output)
+ decoder_output = decoder_output.transpose(0, 1)
+ outputs_class = self.class_embed(decoder_output)
+ mask_embed = self.mask_embed(decoder_output)
+ outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
+
+ # NOTE: prediction is of higher-resolution
+ # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW]
+ attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False)
+ # must use bool type
+ # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged.
+ #print('before', attn_mask.shape)
+ #print(self.num_heads)
+ attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool()
+ #print('after', attn_mask.shape)
+ attn_mask = attn_mask.detach()
+
+ return outputs_class, outputs_mask, attn_mask
+
+ @torch.jit.unused
+ def _set_aux_loss(self, outputs_class, outputs_seg_masks):
+ # this is a workaround to make torchscript happy, as torchscript
+ # doesn't support dictionary with non-homogeneous values, such
+ # as a dict having both a Tensor and a list.
+ if self.mask_classification:
+ return [
+ {"pred_logits": a, "pred_masks": b}
+ for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
+ ]
+ else:
+ return [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
diff --git a/modeling_cuda/transformer_decoder/maskformer_transformer_decoder.py b/modeling_cuda/transformer_decoder/maskformer_transformer_decoder.py
new file mode 100644
index 0000000000000000000000000000000000000000..458f53321f42471005bd5f466ed992c245ffff4b
--- /dev/null
+++ b/modeling_cuda/transformer_decoder/maskformer_transformer_decoder.py
@@ -0,0 +1,123 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/detr.py
+import fvcore.nn.weight_init as weight_init
+import torch
+from torch import nn
+from torch.nn import functional as F
+
+from .position_encoding import PositionEmbeddingSine
+from .transformer import Transformer
+
+
+class StandardTransformerDecoder(nn.Module):
+ def __init__(
+ self,
+ in_channels,
+ num_classes,
+ mask_classification=True,
+ hidden_dim=256,
+ num_queries=100,
+ nheads=8,
+ dropout=0.0,
+ dim_feedforward=2048,
+ enc_layers=0,
+ dec_layers=10,
+ pre_norm=False,
+ deep_supervision=True,
+ mask_dim=256,
+ enforce_input_project=False
+ ):
+ super().__init__()
+ self.mask_classification = mask_classification
+ # positional encoding
+ N_steps = hidden_dim // 2
+ self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
+
+ transformer = Transformer(
+ d_model=hidden_dim,
+ dropout=dropout,
+ nhead=nheads,
+ dim_feedforward=dim_feedforward,
+ num_encoder_layers=enc_layers,
+ num_decoder_layers=dec_layers,
+ normalize_before=pre_norm,
+ return_intermediate_dec=deep_supervision,
+ )
+
+ self.num_queries = num_queries
+ self.transformer = transformer
+ hidden_dim = transformer.d_model
+
+ self.query_embed = nn.Embedding(num_queries, hidden_dim)
+
+ if in_channels != hidden_dim or enforce_input_project:
+            self.input_proj = nn.Conv2d(in_channels, hidden_dim, kernel_size=1)
+ weight_init.c2_xavier_fill(self.input_proj)
+ else:
+ self.input_proj = nn.Sequential()
+ self.aux_loss = deep_supervision
+
+ # output FFNs
+ if self.mask_classification:
+ self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
+ self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
+
+ def forward(self, x, mask_features, mask=None):
+ if mask is not None:
+ mask = F.interpolate(mask[None].float(), size=x.shape[-2:]).to(torch.bool)[0]
+ pos = self.pe_layer(x, mask)
+
+ src = x
+ hs, memory = self.transformer(self.input_proj(src), mask, self.query_embed.weight, pos)
+
+ if self.mask_classification:
+ outputs_class = self.class_embed(hs)
+ out = {"pred_logits": outputs_class[-1]}
+ else:
+ out = {}
+
+ if self.aux_loss:
+ # [l, bs, queries, embed]
+ mask_embed = self.mask_embed(hs)
+ outputs_seg_masks = torch.einsum("lbqc,bchw->lbqhw", mask_embed, mask_features)
+ out["pred_masks"] = outputs_seg_masks[-1]
+ out["aux_outputs"] = self._set_aux_loss(
+ outputs_class if self.mask_classification else None, outputs_seg_masks
+ )
+ else:
+ # FIXME h_boxes takes the last one computed, keep this in mind
+ # [bs, queries, embed]
+ mask_embed = self.mask_embed(hs[-1])
+ outputs_seg_masks = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
+ out["pred_masks"] = outputs_seg_masks
+ return out
+
+ @torch.jit.unused
+ def _set_aux_loss(self, outputs_class, outputs_seg_masks):
+ # this is a workaround to make torchscript happy, as torchscript
+ # doesn't support dictionary with non-homogeneous values, such
+ # as a dict having both a Tensor and a list.
+ if self.mask_classification:
+ return [
+ {"pred_logits": a, "pred_masks": b}
+ for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
+ ]
+ else:
+ return [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
+
+
+class MLP(nn.Module):
+ """Very simple multi-layer perceptron (also called FFN)"""
+
+ def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
+ super().__init__()
+ self.num_layers = num_layers
+ h = [hidden_dim] * (num_layers - 1)
+ self.layers = nn.ModuleList(
+ nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim])
+ )
+
+ def forward(self, x):
+ for i, layer in enumerate(self.layers):
+ x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
+ return x
diff --git a/modeling_cuda/transformer_decoder/position_encoding.py b/modeling_cuda/transformer_decoder/position_encoding.py
new file mode 100644
index 0000000000000000000000000000000000000000..c27d3c46b260252c9d0d94f15b900e7615032c0f
--- /dev/null
+++ b/modeling_cuda/transformer_decoder/position_encoding.py
@@ -0,0 +1,64 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/position_encoding.py
+"""
+Various positional encodings for the transformer.
+"""
+import math
+
+import torch
+from torch import nn
+
+
+class PositionEmbeddingSine(nn.Module):
+ """
+ This is a more standard version of the position embedding, very similar to the one
+ used by the Attention is all you need paper, generalized to work on images.
+ """
+
+ def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
+ super().__init__()
+ self.num_pos_feats = num_pos_feats
+ self.temperature = temperature
+ self.normalize = normalize
+ if scale is not None and normalize is False:
+ raise ValueError("normalize should be True if scale is passed")
+ if scale is None:
+ scale = 2 * math.pi
+ self.scale = scale
+
+ def forward(self, x, mask=None):
+ if mask is None:
+ mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)
+ not_mask = ~mask
+ y_embed = not_mask.cumsum(1, dtype=torch.float32)
+ x_embed = not_mask.cumsum(2, dtype=torch.float32)
+ if self.normalize:
+ eps = 1e-6
+ y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
+ x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
+
+ dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device)
+ dim_t = self.temperature ** (2 * (torch.div(dim_t, 2, rounding_mode='floor')) / self.num_pos_feats)
+
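+        # even channels take the sine and odd channels the cosine of the same frequency,
+        # following the fixed sinusoidal encoding of "Attention Is All You Need"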
+ pos_x = x_embed[:, :, :, None] / dim_t
+ pos_y = y_embed[:, :, :, None] / dim_t
+ pos_x = torch.stack(
+ (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
+ ).flatten(3)
+ pos_y = torch.stack(
+ (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
+ ).flatten(3)
+ pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
+ return pos
+
+ def __repr__(self, _repr_indent=4):
+ head = "Positional encoding " + self.__class__.__name__
+ body = [
+ "num_pos_feats: {}".format(self.num_pos_feats),
+ "temperature: {}".format(self.temperature),
+ "normalize: {}".format(self.normalize),
+ "scale: {}".format(self.scale),
+ ]
+ # _repr_indent = 4
+ lines = [head] + [" " * _repr_indent + line for line in body]
+ return "\n".join(lines)
\ No newline at end of file
diff --git a/modeling_cuda/transformer_decoder/transformer.py b/modeling_cuda/transformer_decoder/transformer.py
new file mode 100644
index 0000000000000000000000000000000000000000..ea8caa0108f5e136a9739320ab69a3e1b6f40298
--- /dev/null
+++ b/modeling_cuda/transformer_decoder/transformer.py
@@ -0,0 +1,369 @@
+# Copyright (c) Facebook, Inc. and its affiliates.
+# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/transformer.py
+"""
+Transformer class.
+
+Copy-paste from torch.nn.Transformer with modifications:
+ * positional encodings are passed in MHattention
+ * extra LN at the end of encoder is removed
+ * decoder returns a stack of activations from all decoding layers
+"""
+import copy
+from typing import List, Optional
+
+import torch
+import torch.nn.functional as F
+from torch import Tensor, nn
+
+
+class Transformer(nn.Module):
+ def __init__(
+ self,
+ d_model=512,
+ nhead=8,
+ num_encoder_layers=6,
+ num_decoder_layers=6,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ normalize_before=False,
+ return_intermediate_dec=False,
+ ):
+ super().__init__()
+
+ encoder_layer = TransformerEncoderLayer(
+ d_model, nhead, dim_feedforward, dropout, activation, normalize_before
+ )
+ encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
+ self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm)
+
+ decoder_layer = TransformerDecoderLayer(
+ d_model, nhead, dim_feedforward, dropout, activation, normalize_before
+ )
+ decoder_norm = nn.LayerNorm(d_model)
+ self.decoder = TransformerDecoder(
+ decoder_layer,
+ num_decoder_layers,
+ decoder_norm,
+ return_intermediate=return_intermediate_dec,
+ )
+
+ self._reset_parameters()
+
+ self.d_model = d_model
+ self.nhead = nhead
+
+ def _reset_parameters(self):
+ for p in self.parameters():
+ if p.dim() > 1:
+ nn.init.xavier_uniform_(p)
+
+ def forward(self, src, mask, query_embed, pos_embed):
+ # flatten NxCxHxW to HWxNxC
+ bs, c, h, w = src.shape
+ src = src.flatten(2).permute(2, 0, 1)
+ pos_embed = pos_embed.flatten(2).permute(2, 0, 1)
+ query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1)
+ if mask is not None:
+ mask = mask.flatten(1)
+
+ tgt = torch.zeros_like(query_embed)
+ memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed)
+ hs = self.decoder(
+ tgt, memory, memory_key_padding_mask=mask, pos=pos_embed, query_pos=query_embed
+ )
+ return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w)
+
+
+class TransformerEncoder(nn.Module):
+ def __init__(self, encoder_layer, num_layers, norm=None):
+ super().__init__()
+ self.layers = _get_clones(encoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.norm = norm
+
+ def forward(
+ self,
+ src,
+ mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ output = src
+
+ for layer in self.layers:
+ output = layer(
+ output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos
+ )
+
+ if self.norm is not None:
+ output = self.norm(output)
+
+ return output
+
+
+class TransformerDecoder(nn.Module):
+ def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False):
+ super().__init__()
+ self.layers = _get_clones(decoder_layer, num_layers)
+ self.num_layers = num_layers
+ self.norm = norm
+ self.return_intermediate = return_intermediate
+
+ def forward(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ output = tgt
+
+ intermediate = []
+
+ for layer in self.layers:
+ output = layer(
+ output,
+ memory,
+ tgt_mask=tgt_mask,
+ memory_mask=memory_mask,
+ tgt_key_padding_mask=tgt_key_padding_mask,
+ memory_key_padding_mask=memory_key_padding_mask,
+ pos=pos,
+ query_pos=query_pos,
+ )
+ if self.return_intermediate:
+ intermediate.append(self.norm(output))
+
+ if self.norm is not None:
+ output = self.norm(output)
+ if self.return_intermediate:
+ intermediate.pop()
+ intermediate.append(output)
+
+ if self.return_intermediate:
+ return torch.stack(intermediate)
+
+ return output.unsqueeze(0)
+
+
+class TransformerEncoderLayer(nn.Module):
+ def __init__(
+ self,
+ d_model,
+ nhead,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ normalize_before=False,
+ ):
+ super().__init__()
+ self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout)
+ self.dropout2 = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(
+ self,
+ src,
+ src_mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ q = k = self.with_pos_embed(src, pos)
+ src2 = self.self_attn(
+ q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask
+ )[0]
+ src = src + self.dropout1(src2)
+ src = self.norm1(src)
+ src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
+ src = src + self.dropout2(src2)
+ src = self.norm2(src)
+ return src
+
+ def forward_pre(
+ self,
+ src,
+ src_mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ src2 = self.norm1(src)
+ q = k = self.with_pos_embed(src2, pos)
+ src2 = self.self_attn(
+ q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask
+ )[0]
+ src = src + self.dropout1(src2)
+ src2 = self.norm2(src)
+ src2 = self.linear2(self.dropout(self.activation(self.linear1(src2))))
+ src = src + self.dropout2(src2)
+ return src
+
+ def forward(
+ self,
+ src,
+ src_mask: Optional[Tensor] = None,
+ src_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ ):
+ if self.normalize_before:
+ return self.forward_pre(src, src_mask, src_key_padding_mask, pos)
+ return self.forward_post(src, src_mask, src_key_padding_mask, pos)
+
+
+class TransformerDecoderLayer(nn.Module):
+ def __init__(
+ self,
+ d_model,
+ nhead,
+ dim_feedforward=2048,
+ dropout=0.1,
+ activation="relu",
+ normalize_before=False,
+ ):
+ super().__init__()
+ self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+ self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
+ # Implementation of Feedforward model
+ self.linear1 = nn.Linear(d_model, dim_feedforward)
+ self.dropout = nn.Dropout(dropout)
+ self.linear2 = nn.Linear(dim_feedforward, d_model)
+
+ self.norm1 = nn.LayerNorm(d_model)
+ self.norm2 = nn.LayerNorm(d_model)
+ self.norm3 = nn.LayerNorm(d_model)
+ self.dropout1 = nn.Dropout(dropout)
+ self.dropout2 = nn.Dropout(dropout)
+ self.dropout3 = nn.Dropout(dropout)
+
+ self.activation = _get_activation_fn(activation)
+ self.normalize_before = normalize_before
+
+ def with_pos_embed(self, tensor, pos: Optional[Tensor]):
+ return tensor if pos is None else tensor + pos
+
+ def forward_post(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ q = k = self.with_pos_embed(tgt, query_pos)
+ tgt2 = self.self_attn(
+ q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask
+ )[0]
+ tgt = tgt + self.dropout1(tgt2)
+ tgt = self.norm1(tgt)
+ tgt2 = self.multihead_attn(
+ query=self.with_pos_embed(tgt, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory,
+ attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask,
+ )[0]
+ tgt = tgt + self.dropout2(tgt2)
+ tgt = self.norm2(tgt)
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
+ tgt = tgt + self.dropout3(tgt2)
+ tgt = self.norm3(tgt)
+ return tgt
+
+ def forward_pre(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ tgt2 = self.norm1(tgt)
+ q = k = self.with_pos_embed(tgt2, query_pos)
+ tgt2 = self.self_attn(
+ q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask
+ )[0]
+ tgt = tgt + self.dropout1(tgt2)
+ tgt2 = self.norm2(tgt)
+ tgt2 = self.multihead_attn(
+ query=self.with_pos_embed(tgt2, query_pos),
+ key=self.with_pos_embed(memory, pos),
+ value=memory,
+ attn_mask=memory_mask,
+ key_padding_mask=memory_key_padding_mask,
+ )[0]
+ tgt = tgt + self.dropout2(tgt2)
+ tgt2 = self.norm3(tgt)
+ tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
+ tgt = tgt + self.dropout3(tgt2)
+ return tgt
+
+ def forward(
+ self,
+ tgt,
+ memory,
+ tgt_mask: Optional[Tensor] = None,
+ memory_mask: Optional[Tensor] = None,
+ tgt_key_padding_mask: Optional[Tensor] = None,
+ memory_key_padding_mask: Optional[Tensor] = None,
+ pos: Optional[Tensor] = None,
+ query_pos: Optional[Tensor] = None,
+ ):
+ if self.normalize_before:
+ return self.forward_pre(
+ tgt,
+ memory,
+ tgt_mask,
+ memory_mask,
+ tgt_key_padding_mask,
+ memory_key_padding_mask,
+ pos,
+ query_pos,
+ )
+ return self.forward_post(
+ tgt,
+ memory,
+ tgt_mask,
+ memory_mask,
+ tgt_key_padding_mask,
+ memory_key_padding_mask,
+ pos,
+ query_pos,
+ )
+
+
+def _get_clones(module, N):
+ return nn.ModuleList([copy.deepcopy(module) for i in range(N)])
+
+
+def _get_activation_fn(activation):
+ """Return an activation function given a string"""
+ if activation == "relu":
+ return F.relu
+ if activation == "gelu":
+ return F.gelu
+ if activation == "glu":
+ return F.glu
+ raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.")
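
A minimal usage sketch for the `Transformer` above, derived only from the `forward` signature shown (image features `B x C x H x W`, an optional key-padding mask, learnable queries `N_q x C`, and a positional encoding with the same shape as the features). The tensor sizes are illustrative, and the import path assumes the repository root is on `PYTHONPATH`.

```python
import torch
from modeling_cuda.transformer_decoder.transformer import Transformer

model = Transformer(d_model=256, nhead=8, num_encoder_layers=2,
                    num_decoder_layers=2, return_intermediate_dec=True)

bs, c, h, w = 2, 256, 16, 16
src = torch.randn(bs, c, h, w)                  # flattened internally to HW x B x C
pos_embed = torch.randn(bs, c, h, w)            # positional encoding, same shape as src
query_embed = torch.randn(100, c)               # learnable queries, N_q x C
mask = torch.zeros(bs, h, w, dtype=torch.bool)  # key padding mask (True = padded position)

hs, memory = model(src, mask, query_embed, pos_embed)
print(hs.shape)      # torch.Size([2, 2, 100, 256]) -> (num_decoder_layers, bs, N_q, d_model)
print(memory.shape)  # torch.Size([2, 256, 16, 16]) -> encoder memory reshaped to B x C x H x W
```

With `return_intermediate_dec=True` the decoder stacks the normalized output of every layer, which is what makes deep supervision on all decoder layers possible rather than supervising only the final one.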
diff --git a/refer/LICENSE b/refer/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..261eeb9e9f8b2b4b0d119366dda99c6fd7d35c64
--- /dev/null
+++ b/refer/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/refer/Makefile b/refer/Makefile
new file mode 100644
index 0000000000000000000000000000000000000000..7c6ea46ee209844ab6c85f4cd2202fe14a351115
--- /dev/null
+++ b/refer/Makefile
@@ -0,0 +1,6 @@
+all:
+ # install pycocotools/mask locally
+ # copy from https://github.com/pdollar/coco.git
+ python setup.py build_ext --inplace
+ rm -rf build
+
diff --git a/refer/README.md b/refer/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..abeae48bb0a561b5f9c3337cd98551ff759f2b1e
--- /dev/null
+++ b/refer/README.md
@@ -0,0 +1,55 @@
+## Note
+This API is able to load all four referring expression datasets, i.e., RefClef, RefCOCO, RefCOCO+ and RefCOCOg.
+They come with different train/val/test splits created by UNC, Google and UC Berkeley, respectively. We provide all of these splits here.
+
+
+
+
+
+## Citation
+If you use the three datasets RefClef, RefCOCO and RefCOCO+ that were collected by UNC, please consider citing our EMNLP 2014 paper; if you want to compare with our recent results, please check our ECCV 2016 paper.
+```bash
+Kazemzadeh, Sahar, et al. "ReferItGame: Referring to Objects in Photographs of Natural Scenes." EMNLP 2014.
+Yu, Licheng, et al. "Modeling Context in Referring Expressions." ECCV 2016.
+```
+
+## Setup
+Run "make" before using the code.
+It will generate ``_mask.c`` and ``_mask.so`` in the ``external/`` folder.
+This mask-related code is copied from the MS COCO [API](https://github.com/pdollar/coco).
+
+## Download
+Download the cleaned data and extract it into the ``data`` folder:
+- 1) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refclef.zip
+- 2) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco.zip
+- 3) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcoco+.zip
+- 4) http://bvisionweb1.cs.unc.edu/licheng/referit/data/refcocog.zip
+
+## Prepare Images
+In addition, add the ``mscoco`` images into the ``data/images`` folder; they can be downloaded from [mscoco](http://mscoco.org/dataset/#overview).
+COCO's images are used for RefCOCO, RefCOCO+ and RefCOCOg.
+For RefCLEF, please add ``saiapr_tc-12`` into the ``data/images`` folder. We extracted the 19,997 related images for the cleaned RefCLEF dataset, which is a subset of the original [imageCLEF](http://imageclef.org/SIAPRdata). Download the [subset](http://bvisionweb1.cs.unc.edu/licheng/referit/data/images/saiapr_tc-12.zip) and unzip it to ``data/images/saiapr_tc-12``.
+
+## How to use
+The "refer.py" module is able to load all four datasets with the different data splits provided by UNC, Google, UMD and UC Berkeley.
+**Note: for RefCOCOg, we suggest using UMD's split, which provides train/val/test splits with no image overlap between them.**
+```bash
+# locate your own data_root, and choose the dataset_splitBy you want to use
+refer = REFER(data_root, dataset='refclef', splitBy='unc')
+refer = REFER(data_root, dataset='refclef', splitBy='berkeley') # 2 train and 1 test images missed
+refer = REFER(data_root, dataset='refcoco', splitBy='unc')
+refer = REFER(data_root, dataset='refcoco', splitBy='google')
+refer = REFER(data_root, dataset='refcoco+', splitBy='unc')
+refer = REFER(data_root, dataset='refcocog', splitBy='google') # test split not released yet
+refer = REFER(data_root, dataset='refcocog', splitBy='umd') # Recommended, including train/val/test
+```
+
+
+
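
Once a `REFER` instance has been constructed as above, references can be enumerated and read back. The sketch below is a hedged example: `getRefIds`, `Refs`, `image_id` and the `sentences`/`sent` fields follow how `refer.py` is used elsewhere in this repository (e.g. in `pyEvalDemo.ipynb`); treat any field not shown there as an assumption.

```python
from refer import REFER

refer = REFER('./data', dataset='refcoco', splitBy='unc')

train_ref_ids = refer.getRefIds(split='train')
print('%d training references' % len(train_ref_ids))

ref = refer.Refs[train_ref_ids[0]]
print('image id:', ref['image_id'])
for i, sent in enumerate(ref['sentences']):
    print('%d. %s' % (i + 1, sent['sent']))
```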
diff --git a/refer/__pycache__/refer.cpython-37.pyc b/refer/__pycache__/refer.cpython-37.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e51c7355c8000e3a89ddd5199e711663cddfae59
Binary files /dev/null and b/refer/__pycache__/refer.cpython-37.pyc differ
diff --git a/refer/__pycache__/refer.cpython-38.pyc b/refer/__pycache__/refer.cpython-38.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..aae17e14a59bdb360cb1d4c580bfc3f8f2a1c7ee
Binary files /dev/null and b/refer/__pycache__/refer.cpython-38.pyc differ
diff --git a/refer/external/README.md b/refer/external/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..0a0a681c7c1aaf41cfc2ae73cbdbffe55437d210
--- /dev/null
+++ b/refer/external/README.md
@@ -0,0 +1 @@
+The code inside this folder is copied from pycocotools: https://github.com/pdollar/coco
\ No newline at end of file
diff --git a/refer/external/__init__.py b/refer/external/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..3f7d85bba884ea8f83fc6ab2a1e6ade80d98d4d9
--- /dev/null
+++ b/refer/external/__init__.py
@@ -0,0 +1 @@
+__author__ = 'tylin'
diff --git a/refer/external/_mask.pyx b/refer/external/_mask.pyx
new file mode 100644
index 0000000000000000000000000000000000000000..9f0562c26bc45bc927e8ef65c1f60f1670c50be1
--- /dev/null
+++ b/refer/external/_mask.pyx
@@ -0,0 +1,291 @@
+# distutils: language = c
+# distutils: sources = external/maskApi.c
+
+#**************************************************************************
+# Microsoft COCO Toolbox. version 2.0
+# Data, paper, and tutorials available at: http://mscoco.org/
+# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
+# Licensed under the Simplified BSD License [see coco/license.txt]
+#**************************************************************************
+
+__author__ = 'tsungyi'
+
+# import both Python-level and C-level symbols of Numpy
+# the API uses Numpy to interface C and Python
+import numpy as np
+cimport numpy as np
+from libc.stdlib cimport malloc, free
+
+# initialize Numpy. must do.
+np.import_array()
+
+# import numpy C function
+# we use PyArray_ENABLEFLAGS to make the Numpy ndarray responsible for memory management
+cdef extern from "numpy/arrayobject.h":
+ void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)
+
+# Declare the prototype of the C functions in MaskApi.h
+cdef extern from "maskApi.h":
+ ctypedef unsigned int uint
+ ctypedef unsigned long siz
+ ctypedef unsigned char byte
+ ctypedef double* BB
+ ctypedef struct RLE:
+ siz h,
+ siz w,
+ siz m,
+ uint* cnts,
+ void rlesInit( RLE **R, siz n )
+ void rleEncode( RLE *R, const byte *M, siz h, siz w, siz n )
+ void rleDecode( const RLE *R, byte *mask, siz n )
+ void rleMerge( const RLE *R, RLE *M, siz n, bint intersect )
+ void rleArea( const RLE *R, siz n, uint *a )
+ void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o )
+ void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o )
+ void rleToBbox( const RLE *R, BB bb, siz n )
+ void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n )
+ void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w )
+ char* rleToString( const RLE *R )
+ void rleFrString( RLE *R, char *s, siz h, siz w )
+
+# python class to wrap RLE array in C
+# the class handles the memory allocation and deallocation
+cdef class RLEs:
+ cdef RLE *_R
+ cdef siz _n
+
+ def __cinit__(self, siz n =0):
+ rlesInit(&self._R, n)
+ self._n = n
+
+ # free the RLE array here
+ def __dealloc__(self):
+ if self._R is not NULL:
+ for i in range(self._n):
+ free(self._R[i].cnts)
+ free(self._R)
+ def __getattr__(self, key):
+ if key == 'n':
+ return self._n
+ raise AttributeError(key)
+
+# python class to wrap Mask array in C
+# the class handles the memory allocation and deallocation
+cdef class Masks:
+ cdef byte *_mask
+ cdef siz _h
+ cdef siz _w
+ cdef siz _n
+
+ def __cinit__(self, h, w, n):
+ self._mask = <byte*> malloc(h*w*n* sizeof(byte))
+ self._h = h
+ self._w = w
+ self._n = n
+ # def __dealloc__(self):
+ # the memory management of _mask has been passed to np.ndarray
+ # it doesn't need to be freed here
+
+ # called when passing into np.array() and return an np.ndarray in column-major order
+ def __array__(self):
+ cdef np.npy_intp shape[1]
+ shape[0] = self._h*self._w*self._n
+ # Create a 1D array, and reshape it to fortran/Matlab column-major array
+ ndarray = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT8, self._mask).reshape((self._h, self._w, self._n), order='F')
+ # The _mask allocated by Masks is now handled by ndarray
+ PyArray_ENABLEFLAGS(ndarray, np.NPY_OWNDATA)
+ return ndarray
+
+# internal conversion from Python RLEs object to compressed RLE format
+def _toString(RLEs Rs):
+ cdef siz n = Rs.n
+ cdef bytes py_string
+ cdef char* c_string
+ objs = []
+ for i in range(n):
+ c_string = rleToString( <RLE*> &Rs._R[i] )
+ py_string = c_string
+ objs.append({
+ 'size': [Rs._R[i].h, Rs._R[i].w],
+ 'counts': py_string
+ })
+ free(c_string)
+ return objs
+
+# internal conversion from compressed RLE format to Python RLEs object
+def _frString(rleObjs):
+ cdef siz n = len(rleObjs)
+ Rs = RLEs(n)
+ cdef bytes py_string
+ cdef char* c_string
+ for i, obj in enumerate(rleObjs):
+ py_string = str(obj['counts'])
+ c_string = py_string
+ rleFrString( <RLE*> &Rs._R[i], <char*> c_string, obj['size'][0], obj['size'][1] )
+ return Rs
+
+# encode mask to RLEs objects
+# list of RLE string can be generated by RLEs member function
+def encode(np.ndarray[np.uint8_t, ndim=3, mode='fortran'] mask):
+ h, w, n = mask.shape[0], mask.shape[1], mask.shape[2]
+ cdef RLEs Rs = RLEs(n)
+ rleEncode(Rs._R,<byte*>mask.data,h,w,n)
+ objs = _toString(Rs)
+ return objs
+
+# decode mask from compressed list of RLE string or RLEs object
+def decode(rleObjs):
+ cdef RLEs Rs = _frString(rleObjs)
+ h, w, n = Rs._R[0].h, Rs._R[0].w, Rs._n
+ masks = Masks(h, w, n)
+ rleDecode( Rs._R, masks._mask, n );
+ return np.array(masks)
+
+def merge(rleObjs, bint intersect=0):
+ cdef RLEs Rs = _frString(rleObjs)
+ cdef RLEs R = RLEs(1)
+ rleMerge(Rs._R, R._R, Rs._n, intersect)
+ obj = _toString(R)[0]
+ return obj
+
+def area(rleObjs):
+ cdef RLEs Rs = _frString(rleObjs)
+ cdef uint* _a = <uint*> malloc(Rs._n* sizeof(uint))
+ rleArea(Rs._R, Rs._n, _a)
+ cdef np.npy_intp shape[1]
+ shape[0] = Rs._n
+ a = np.array((Rs._n, ), dtype=np.uint8)
+ a = np.PyArray_SimpleNewFromData(1, shape, np.NPY_UINT32, _a)
+ PyArray_ENABLEFLAGS(a, np.NPY_OWNDATA)
+ return a
+
+# iou computation. support function overload (RLEs-RLEs and bbox-bbox).
+def iou( dt, gt, pyiscrowd ):
+ def _preproc(objs):
+ if len(objs) == 0:
+ return objs
+ if type(objs) == np.ndarray:
+ if len(objs.shape) == 1:
+ objs = objs.reshape((objs[0], 1))
+ # check if it's Nx4 bbox
+ if not len(objs.shape) == 2 or not objs.shape[1] == 4:
+ raise Exception('numpy ndarray input is only for *bounding boxes* and should have Nx4 dimension')
+ objs = objs.astype(np.double)
+ elif type(objs) == list:
+ # check if list is in box format and convert it to np.ndarray
+ isbox = np.all(np.array([(len(obj)==4) and ((type(obj)==list) or (type(obj)==np.ndarray)) for obj in objs]))
+ isrle = np.all(np.array([type(obj) == dict for obj in objs]))
+ if isbox:
+ objs = np.array(objs, dtype=np.double)
+ if len(objs.shape) == 1:
+ objs = objs.reshape((1,objs.shape[0]))
+ elif isrle:
+ objs = _frString(objs)
+ else:
+ raise Exception('list input can be bounding box (Nx4) or RLEs ([RLE])')
+ else:
+ raise Exception('unrecognized type. The following type: RLEs (rle), np.ndarray (box), and list (box) are supported.')
+ return objs
+ def _rleIou(RLEs dt, RLEs gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ rleIou( <RLE*> dt._R, <RLE*> gt._R, m, n, <byte*> iscrowd.data, <double*> _iou.data )
+ def _bbIou(np.ndarray[np.double_t, ndim=2] dt, np.ndarray[np.double_t, ndim=2] gt, np.ndarray[np.uint8_t, ndim=1] iscrowd, siz m, siz n, np.ndarray[np.double_t, ndim=1] _iou):
+ bbIou( <BB> dt.data, <BB> gt.data, m, n, <byte*> iscrowd.data, <double*> _iou.data )
+ def _len(obj):
+ cdef siz N = 0
+ if type(obj) == RLEs:
+ N = obj.n
+ elif len(obj)==0:
+ pass
+ elif type(obj) == np.ndarray:
+ N = obj.shape[0]
+ return N
+ # convert iscrowd to numpy array
+ cdef np.ndarray[np.uint8_t, ndim=1] iscrowd = np.array(pyiscrowd, dtype=np.uint8)
+ # simple type checking
+ cdef siz m, n
+ dt = _preproc(dt)
+ gt = _preproc(gt)
+ m = _len(dt)
+ n = _len(gt)
+ if m == 0 or n == 0:
+ return []
+ if not type(dt) == type(gt):
+ raise Exception('The dt and gt should have the same data type, either RLEs, list or np.ndarray')
+
+ # define local variables
+ cdef double* _iou = 0
+ cdef np.npy_intp shape[1]
+ # check type and assign iou function
+ if type(dt) == RLEs:
+ _iouFun = _rleIou
+ elif type(dt) == np.ndarray:
+ _iouFun = _bbIou
+ else:
+ raise Exception('input data type not allowed.')
+ _iou = <double*> malloc(m*n* sizeof(double))
+ iou = np.zeros((m*n, ), dtype=np.double)
+ shape[0] = m*n
+ iou = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _iou)
+ PyArray_ENABLEFLAGS(iou, np.NPY_OWNDATA)
+ _iouFun(dt, gt, iscrowd, m, n, iou)
+ return iou.reshape((m,n), order='F')
+
+def toBbox( rleObjs ):
+ cdef RLEs Rs = _frString(rleObjs)
+ cdef siz n = Rs.n
+ cdef BB _bb = <BB> malloc(4*n* sizeof(double))
+ rleToBbox( Rs._R, _bb, n )
+ cdef np.npy_intp shape[1]
+ shape[0] = 4*n
+ bb = np.array((1,4*n), dtype=np.double)
+ bb = np.PyArray_SimpleNewFromData(1, shape, np.NPY_DOUBLE, _bb).reshape((n, 4))
+ PyArray_ENABLEFLAGS(bb, np.NPY_OWNDATA)
+ return bb
+
+def frBbox(np.ndarray[np.double_t, ndim=2] bb, siz h, siz w ):
+ cdef siz n = bb.shape[0]
+ Rs = RLEs(n)
+ rleFrBbox( <RLE*> Rs._R, <const BB> bb.data, h, w, n )
+ objs = _toString(Rs)
+ return objs
+
+def frPoly( poly, siz h, siz w ):
+ cdef np.ndarray[np.double_t, ndim=1] np_poly
+ n = len(poly)
+ Rs = RLEs(n)
+ for i, p in enumerate(poly):
+ np_poly = np.array(p, dtype=np.double, order='F')
+ rleFrPoly( <RLE*> &Rs._R[i], <const double*> np_poly.data, len(np_poly)/2, h, w )
+ objs = _toString(Rs)
+ return objs
+
+def frUncompressedRLE(ucRles, siz h, siz w):
+ cdef np.ndarray[np.uint32_t, ndim=1] cnts
+ cdef RLE R
+ cdef uint *data
+ n = len(ucRles)
+ objs = []
+ for i in range(n):
+ Rs = RLEs(1)
+ cnts = np.array(ucRles[i]['counts'], dtype=np.uint32)
+ # time for malloc can be saved here but it's fine
+ data = <uint*> malloc(len(cnts)* sizeof(uint))
+ for j in range(len(cnts)):
+ data[j] = cnts[j]
+ R = RLE(ucRles[i]['size'][0], ucRles[i]['size'][1], len(cnts), data)
+ Rs._R[0] = R
+ objs.append(_toString(Rs)[0])
+ return objs
+
+def frPyObjects(pyobj, siz h, w):
+ if type(pyobj) == np.ndarray:
+ objs = frBbox(pyobj, h, w )
+ elif type(pyobj) == list and len(pyobj[0]) == 4:
+ objs = frBbox(pyobj, h, w )
+ elif type(pyobj) == list and len(pyobj[0]) > 4:
+ objs = frPoly(pyobj, h, w )
+ elif type(pyobj) == list and type(pyobj[0]) == dict:
+ objs = frUncompressedRLE(pyobj, h, w)
+ else:
+ raise Exception('input type is not supported.')
+ return objs
diff --git a/refer/external/mask.py b/refer/external/mask.py
new file mode 100644
index 0000000000000000000000000000000000000000..5462c341d99d02500b0e0abe6418bde5b5838b0c
--- /dev/null
+++ b/refer/external/mask.py
@@ -0,0 +1,82 @@
+__author__ = 'tsungyi'
+
+import external._mask as _mask
+
+# Interface for manipulating masks stored in RLE format.
+#
+# RLE is a simple yet efficient format for storing binary masks. RLE
+# first divides a vector (or vectorized image) into a series of piecewise
+# constant regions and then for each piece simply stores the length of
+# that piece. For example, given M=[0 0 1 1 1 0 1] the RLE counts would
+# be [2 3 1 1], or for M=[1 1 1 1 1 1 0] the counts would be [0 6 1]
+# (note that the odd counts are always the numbers of zeros). Instead of
+# storing the counts directly, additional compression is achieved with a
+# variable bitrate representation based on a common scheme called LEB128.
+#
+# Compression is greatest given large piecewise constant regions.
+# Specifically, the size of the RLE is proportional to the number of
+# *boundaries* in M (or for an image the number of boundaries in the y
+# direction). Assuming fairly simple shapes, the RLE representation is
+# O(sqrt(n)) where n is number of pixels in the object. Hence space usage
+# is substantially lower, especially for large simple objects (large n).
+#
+# Many common operations on masks can be computed directly using the RLE
+# (without need for decoding). This includes computations such as area,
+# union, intersection, etc. All of these operations are linear in the
+# size of the RLE, in other words they are O(sqrt(n)) where n is the area
+# of the object. Computing these operations on the original mask is O(n).
+# Thus, using the RLE can result in substantial computational savings.
+#
+# The following API functions are defined:
+# encode - Encode binary masks using RLE.
+# decode - Decode binary masks encoded via RLE.
+# merge - Compute union or intersection of encoded masks.
+# iou - Compute intersection over union between masks.
+# area - Compute area of encoded masks.
+# toBbox - Get bounding boxes surrounding encoded masks.
+# frPyObjects - Convert polygon, bbox, and uncompressed RLE to encoded RLE mask.
+#
+# Usage:
+# Rs = encode( masks )
+# masks = decode( Rs )
+# R = merge( Rs, intersect=false )
+# o = iou( dt, gt, iscrowd )
+# a = area( Rs )
+# bbs = toBbox( Rs )
+# Rs = frPyObjects( [pyObjects], h, w )
+#
+# In the API the following formats are used:
+# Rs - [dict] Run-length encoding of binary masks
+# R - dict Run-length encoding of binary mask
+# masks - [hxwxn] Binary mask(s) (must have type np.ndarray(dtype=uint8) in column-major order)
+# iscrowd - [nx1] list of np.ndarray. 1 indicates corresponding gt image has crowd region to ignore
+# bbs - [nx4] Bounding box(es) stored as [x y w h]
+# poly - Polygon stored as [[x1 y1 x2 y2...],[x1 y1 ...],...] (2D list)
+# dt,gt - May be either bounding boxes or encoded masks
+# Both poly and bbs are 0-indexed (bbox=[0 0 1 1] encloses first pixel).
+#
+# Finally, a note about the intersection over union (iou) computation.
+# The standard iou of a ground truth (gt) and detected (dt) object is
+# iou(gt,dt) = area(intersect(gt,dt)) / area(union(gt,dt))
+# For "crowd" regions, we use a modified criteria. If a gt object is
+# marked as "iscrowd", we allow a dt to match any subregion of the gt.
+# Choosing gt' in the crowd gt that best matches the dt can be done using
+# gt'=intersect(dt,gt). Since by definition union(gt',dt)=dt, computing
+# iou(gt,dt,iscrowd) = iou(gt',dt) = area(intersect(gt,dt)) / area(dt)
+# For crowd gt regions we use this modified criteria above for the iou.
+#
+# To compile run "python setup.py build_ext --inplace"
+# Please do not contact us for help with compiling.
+#
+# Microsoft COCO Toolbox. version 2.0
+# Data, paper, and tutorials available at: http://mscoco.org/
+# Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
+# Licensed under the Simplified BSD License [see coco/license.txt]
+
+encode = _mask.encode
+decode = _mask.decode
+iou = _mask.iou
+merge = _mask.merge
+area = _mask.area
+toBbox = _mask.toBbox
+frPyObjects = _mask.frPyObjects
\ No newline at end of file
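
A small round trip through the RLE API documented above, assuming the extension has been built with `make` so that `external._mask` is importable from the `refer/` directory (the same calls exist in `pycocotools.mask`). As the comments state, `encode` expects a column-major (Fortran-order) `uint8` array of shape `h x w x n`, and `toBbox` returns 0-indexed `[x y w h]` boxes.

```python
import numpy as np
import external.mask as maskUtils  # run from the refer/ directory after `make`

# two 4x6 binary masks in Fortran (column-major) order, as required by encode()
masks = np.zeros((4, 6, 2), dtype=np.uint8, order='F')
masks[1:3, 1:4, 0] = 1   # a 2x3 rectangle
masks[0:2, 4:6, 1] = 1   # a 2x2 rectangle

rles = maskUtils.encode(masks)            # list of {'size': [h, w], 'counts': ...}
print(maskUtils.area(rles))               # [6 4]
print(maskUtils.toBbox(rles))             # [[1. 1. 3. 2.], [4. 0. 2. 2.]] as [x y w h]
print(maskUtils.iou(rles, rles, [0, 0]))  # pairwise IoU matrix; 1.0 on the diagonal, 0.0 off it

decoded = maskUtils.decode(rles)          # back to an h x w x n uint8 array
assert (decoded == masks).all()
```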
diff --git a/refer/external/maskApi.c b/refer/external/maskApi.c
new file mode 100644
index 0000000000000000000000000000000000000000..85e397918278126ce11f225dc109efbeb8a9394f
--- /dev/null
+++ b/refer/external/maskApi.c
@@ -0,0 +1,230 @@
+/**************************************************************************
+* Microsoft COCO Toolbox. version 2.0
+* Data, paper, and tutorials available at: http://mscoco.org/
+* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
+* Licensed under the Simplified BSD License [see coco/license.txt]
+**************************************************************************/
+#include "maskApi.h"
+#include <math.h>
+#include <stdlib.h>
+
+uint umin( uint a, uint b ) { return (a<b) ? a : b; }
+uint umax( uint a, uint b ) { return (a>b) ? a : b; }
+
+void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts ) {
+ R->h=h; R->w=w; R->m=m; R->cnts=(m==0)?0:malloc(sizeof(uint)*m);
+ siz j; if(cnts) for(j=0; j<m; j++) R->cnts[j]=cnts[j];
+}
+
+void rleFree( RLE *R ) {
+ free(R->cnts); R->cnts=0;
+}
+
+void rlesInit( RLE **R, siz n ) {
+ siz i; *R = (RLE*) malloc(sizeof(RLE)*n);
+ for(i=0; i0 ) {
+ c=umin(ca,cb); cc+=c; ct=0;
+ ca-=c; if(!ca && a0) {
+ crowd=iscrowd!=NULL && iscrowd[g];
+ if(dt[d].h!=gt[g].h || dt[d].w!=gt[g].w) { o[g*m+d]=-1; continue; }
+ siz ka, kb, a, b; uint c, ca, cb, ct, i, u; int va, vb;
+ ca=dt[d].cnts[0]; ka=dt[d].m; va=vb=0;
+ cb=gt[g].cnts[0]; kb=gt[g].m; a=b=1; i=u=0; ct=1;
+ while( ct>0 ) {
+ c=umin(ca,cb); if(va||vb) { u+=c; if(va&&vb) i+=c; } ct=0;
+ ca-=c; if(!ca && athr) keep[j]=0;
+ }
+ }
+}
+
+void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o ) {
+ double h, w, i, u, ga, da; siz g, d; int crowd;
+ for( g=0; gthr) keep[j]=0;
+ }
+ }
+}
+
+void rleToBbox( const RLE *R, BB bb, siz n ) {
+ siz i; for( i=0; id?1:c=dy && xs>xe) || (dxye);
+ if(flip) { t=xs; xs=xe; xe=t; t=ys; ys=ye; ye=t; }
+ s = dx>=dy ? (double)(ye-ys)/dx : (double)(xe-xs)/dy;
+ if(dx>=dy) for( d=0; d<=dx; d++ ) {
+ t=flip?dx-d:d; u[m]=t+xs; v[m]=(int)(ys+s*t+.5); m++;
+ } else for( d=0; d<=dy; d++ ) {
+ t=flip?dy-d:d; v[m]=t+ys; u[m]=(int)(xs+s*t+.5); m++;
+ }
+ }
+ /* get points along y-boundary and downsample */
+ free(x); free(y); k=m; m=0; double xd, yd;
+ x=malloc(sizeof(int)*k); y=malloc(sizeof(int)*k);
+ for( j=1; jw-1 ) continue;
+ yd=(double)(v[j]h) yd=h; yd=ceil(yd);
+ x[m]=(int) xd; y[m]=(int) yd; m++;
+ }
+ /* compute rle encoding given y-boundary points */
+ k=m; a=malloc(sizeof(uint)*(k+1));
+ for( j=0; j0) b[m++]=a[j++]; else {
+ j++; if(jm, p=0; long x; int more;
+ char *s=malloc(sizeof(char)*m*6);
+ for( i=0; icnts[i]; if(i>2) x-=(long) R->cnts[i-2]; more=1;
+ while( more ) {
+ char c=x & 0x1f; x >>= 5; more=(c & 0x10) ? x!=-1 : x!=0;
+ if(more) c |= 0x20; c+=48; s[p++]=c;
+ }
+ }
+ s[p]=0; return s;
+}
+
+void rleFrString( RLE *R, char *s, siz h, siz w ) {
+ siz m=0, p=0, k; long x; int more; uint *cnts;
+ while( s[m] ) m++; cnts=malloc(sizeof(uint)*m); m=0;
+ while( s[p] ) {
+ x=0; k=0; more=1;
+ while( more ) {
+ char c=s[p]-48; x |= (c & 0x1f) << 5*k;
+ more = c & 0x20; p++; k++;
+ if(!more && (c & 0x10)) x |= -1 << 5*k;
+ }
+ if(m>2) x+=(long) cnts[m-2]; cnts[m++]=(uint) x;
+ }
+ rleInit(R,h,w,m,cnts); free(cnts);
+}
diff --git a/refer/external/maskApi.h b/refer/external/maskApi.h
new file mode 100644
index 0000000000000000000000000000000000000000..ebc7892da38289b459d6be824e1f849878bd4069
--- /dev/null
+++ b/refer/external/maskApi.h
@@ -0,0 +1,60 @@
+/**************************************************************************
+* Microsoft COCO Toolbox. version 2.0
+* Data, paper, and tutorials available at: http://mscoco.org/
+* Code written by Piotr Dollar and Tsung-Yi Lin, 2015.
+* Licensed under the Simplified BSD License [see coco/license.txt]
+**************************************************************************/
+#pragma once
+
+typedef unsigned int uint;
+typedef unsigned long siz;
+typedef unsigned char byte;
+typedef double* BB;
+typedef struct { siz h, w, m; uint *cnts; } RLE;
+
+/* Initialize/destroy RLE. */
+void rleInit( RLE *R, siz h, siz w, siz m, uint *cnts );
+void rleFree( RLE *R );
+
+/* Initialize/destroy RLE array. */
+void rlesInit( RLE **R, siz n );
+void rlesFree( RLE **R, siz n );
+
+/* Encode binary masks using RLE. */
+void rleEncode( RLE *R, const byte *mask, siz h, siz w, siz n );
+
+/* Decode binary masks encoded via RLE. */
+void rleDecode( const RLE *R, byte *mask, siz n );
+
+/* Compute union or intersection of encoded masks. */
+void rleMerge( const RLE *R, RLE *M, siz n, int intersect );
+
+/* Compute area of encoded masks. */
+void rleArea( const RLE *R, siz n, uint *a );
+
+/* Compute intersection over union between masks. */
+void rleIou( RLE *dt, RLE *gt, siz m, siz n, byte *iscrowd, double *o );
+
+/* Compute non-maximum suppression between bounding masks */
+void rleNms( RLE *dt, siz n, uint *keep, double thr );
+
+/* Compute intersection over union between bounding boxes. */
+void bbIou( BB dt, BB gt, siz m, siz n, byte *iscrowd, double *o );
+
+/* Compute non-maximum suppression between bounding boxes */
+void bbNms( BB dt, siz n, uint *keep, double thr );
+
+/* Get bounding boxes surrounding encoded masks. */
+void rleToBbox( const RLE *R, BB bb, siz n );
+
+/* Convert bounding boxes to encoded masks. */
+void rleFrBbox( RLE *R, const BB bb, siz h, siz w, siz n );
+
+/* Convert polygon to encoded mask. */
+void rleFrPoly( RLE *R, const double *xy, siz k, siz h, siz w );
+
+/* Get compressed string representation of encoded mask. */
+char* rleToString( const RLE *R );
+
+/* Convert from compressed string representation of encoded mask. */
+void rleFrString( RLE *R, char *s, siz h, siz w );
diff --git a/refer/pyEvalDemo.ipynb b/refer/pyEvalDemo.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..54936371ebd0f3c96a172d3c684bcc0d2fdbfa55
--- /dev/null
+++ b/refer/pyEvalDemo.ipynb
@@ -0,0 +1,308 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "%matplotlib inline\n",
+ "from refer import REFER\n",
+ "import numpy as np\n",
+ "import sys\n",
+ "import os.path as osp\n",
+ "import json\n",
+ "import matplotlib.pyplot as plt\n",
+ "from matplotlib.patches import Rectangle"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loading dataset refcoco into memory...\n",
+ "creating index...\n",
+ "index created.\n",
+ "DONE (t=9.47s)\n"
+ ]
+ }
+ ],
+ "source": [
+ "data_root = './data' # contains refclef, refcoco, refcoco+, refcocog and images\n",
+ "dataset = 'refcoco'\n",
+ "splitBy = 'unc'\n",
+ "refer = REFER(data_root, dataset, splitBy)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 1. Evaluate Referring Expressions by Language Metrics"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "sys.path.insert(0, './evaluation')\n",
+ "from refEvaluation import RefEvaluation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{u'sent': u'man in black', u'ref_id': 47}\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Here's our example expression file\n",
+ "sample_expr_file = json.load(open('test/sample_expressions_testA.json', 'r'))\n",
+ "sample_exprs = sample_expr_file['predictions']\n",
+ "print sample_exprs[0]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "tokenization...\n",
+ "setting up scorers...\n",
+ "computing Bleu score...\n",
+ "{'reflen': 5356, 'guess': [5009, 3034, 1477, 275], 'testlen': 5009, 'correct': [2576, 580, 112, 2]}\n",
+ "ratio: 0.935212845407\n",
+ "Bleu_1: 0.480\n",
+ "Bleu_2: 0.293\n",
+ "Bleu_3: 0.182\n",
+ "Bleu_4: 0.080\n",
+ "computing METEOR score...\n",
+ "METEOR: 0.172\n",
+ "computing Rouge score...\n",
+ "ROUGE_L: 0.414\n",
+ "computing CIDEr score...\n",
+ "CIDEr: 0.669\n"
+ ]
+ }
+ ],
+ "source": [
+ "refEval = RefEvaluation(refer, sample_exprs)\n",
+ "refEval.evaluate()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# 2. Evaluate Referring Expressions by Duplicate Rate"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "108/750 (14.40%) images have duplicate predicted sentences.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# evaluate how many images contain duplicate expressions\n",
+ "pred_refToSent = {int(it['ref_id']): it['sent'] for it in sample_exprs}\n",
+ "pred_imgToSents = {}\n",
+ "for ref_id, pred_sent in pred_refToSent.items():\n",
+ " image_id = refer.Refs[ref_id]['image_id']\n",
+ " pred_imgToSents[image_id] = pred_imgToSents.get(image_id, []) + [pred_sent]\n",
+ "# count duplicate\n",
+ "duplicate = 0\n",
+ "for image_id, sents in pred_imgToSents.items():\n",
+ " if len(set(sents)) < len(sents):\n",
+ " duplicate += 1\n",
+ "ratio = duplicate*100.0 / len(pred_imgToSents)\n",
+ "print '%s/%s (%.2f%%) images have duplicate predicted sentences.' % (duplicate, len(pred_imgToSents), ratio)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "collapsed": true
+ },
+ "source": [
+ "# 3. Evaluate Referring Comprehension"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": [
+ "# IoU function\n",
+ "def computeIoU(box1, box2):\n",
+ " # each box is of [x1, y1, w, h]\n",
+ " inter_x1 = max(box1[0], box2[0])\n",
+ " inter_y1 = max(box1[1], box2[1])\n",
+ " inter_x2 = min(box1[0]+box1[2]-1, box2[0]+box2[2]-1)\n",
+ " inter_y2 = min(box1[1]+box1[3]-1, box2[1]+box2[3]-1)\n",
+ "\n",
+ " if inter_x1 < inter_x2 and inter_y1 < inter_y2:\n",
+ " inter = (inter_x2-inter_x1+1)*(inter_y2-inter_y1+1)\n",
+ " else:\n",
+ " inter = 0\n",
+ " union = box1[2]*box1[3] + box2[2]*box2[3] - inter\n",
+ " return float(inter)/union"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 41,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "# randomly sample one ref\n",
+ "ref_ids = refer.getRefIds()\n",
+ "ref_id = ref_ids[np.random.randint(0, len(ref_ids))]\n",
+ "ref = refer.Refs[ref_id]\n",
+ "\n",
+ "# let's fake one bounding box by randomly picking one instance inside this image\n",
+ "image_id = ref['image_id']\n",
+ "anns = refer.imgToAnns[image_id]\n",
+ "ann = anns[np.random.randint(0, len(anns))]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 42,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1. person bending\n",
+ "2. man\n",
+ "3. the person bending over\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAEACAYAAAAdo4LwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXe0Jdd13vnb55yqG19+nbvR3WjEBkGCJADShASAECmR\nw2EwFShqFExzybJspbHWWLIVCMmUrJG1Zlkyh9J4lCzJosIwSCLFKGYQIEDERmoADXSOL793U9U5\ne88fdV/3a7AbxJAEW57V31pvvXvr1j2hzv7OjlVXzIyLuIiL+NpwF3oAF3ER/6PgIlku4iKeJy6S\n5SIu4nniIlku4iKeJy6S5SIu4nniIlku4iKeJ14QsojI60TkcRF5UkR+7oXo4yIu4lsN+WbnWUTE\nA3uB1wBHgHuAt5vZY9/Uji7iIr7FeCE0y43AU2a238xK4C+AN78A/VzERXxL8UKQZQtwaM37w8Nj\nF3ER/0PjhSDLxfqZi/j/JcIL0OYRYNua99uotMtpiMhFQl3EP2qYmTz72AtBlq8Al4vIDuAo8Dbg\n7c8+qRgs0ul00LIPtoIrC4qiR54FEjm+3iJkAcOQZNTaE1hMqBgiGWYJZwUpFZhvoEWXmEpCyElJ\nqWWKmUPNyBvTSBjBiavUnhnOVdfiV27/Vd51+y8jSPWZgOkZLotU55kBWqJmIAlUEAk4H0CMg5/5\n98z7dWzY8R1s2n4d5wqciAhmdvozwTBApFLwt99+O+9617uG/Q2/nwoUo1LYgrgMcYIlATFMIyYG\nJnifVS2mznA+HmEAlqHiEZeBJSADSYh4rFxCrMQsIc4Tiz6qBf/xt/6AX/6Ff4dYQRRP8AlNhrgc\nfIZJwJmjM/8IPoxTH92GOBAzEHfOeT/XsTPX+auP3X777dx+++1fdT2fL05f7zV9rL4+F8732Ted\nLGYWReQngI8DHviDc0XCThy8n1DzpAGk1MNJTq3WJPmAeEfeHEGKPtFB3mhSdFcIeQ0d9Ch78yhQ\nr49RDApqDZC8Sc07ykGXeqNFjAXiPJZ6OC2qC+QETBEnCAJyWlwrEiCIgaoOF1NwLmEmiICKIuJB\nMkwMw2GW0KHYqwxQLajiGgEzELGqPe+QoQiLc8MFc0MWnlkgEYeZDsks4Go4q87HJcQUcGgAh8ec\nAxQdEtyJgzAyXAwQaqglvAkaF/E+JyJ4UyjmSeIJaqhvQMgI2QSS+lWbaQkTw5Fj0RASkgy0QFA0\ntGhNXQsuQpJqLrI6N8AcSMLMnznGVwviWuF8LiH+evHsNr/ePl4IzYKZfRT46HOekyJWgHMBtZzW\n6CSaIv3eCidnjpFq93Hp1G40ePrFCVq1Nr1BQb01hW9OkGmiKAoaY6OgELJ2tbs2xxCBkEfUBOlE\n+p1T5KNNkkIyTx5qqDgwHY5XQRNJCwzDSYa4SmMkNbwYKfYRhKQd0IS4DB8CaobzGV48QRxlfwnT\nhGoCSjA/1BwOxWHigYTIqnOnYP7MddEEQwqDqxZWjJIBg9nDlGXJ1KZdOM0wEs6FIanP7Jaq1bxE\nIKUB3nkMwflxlD5OlzDXgHwM53JME14MNUXNY1kNE8G5gOTjIILooKK6r4FkpDTAiWAKTsJpDqzu\n4maxIrBFkAwhw1yJEBCR0xvS2u9UY35+gnw+TfVCkG0VLwhZng8ETz20iNojZBmmCSuVkVabkekX\nE2dO4AQkJcTl9Is+pjDwK5gELCn1VhMtE+J9tTCaMIvE1Me7Bi4D35jCM8788UcR7TM6eQnLiz1w\nRp7lvOqGXdjgOCkpztdIBiIFmgKSVsDVOH74Pg7vvZ/lfsHLb3g97emNlIMuQk5SRQpX7cACmjqU\nxQouFZX2MU/mM8Q5BoMePmsgHlLZBQwVT5aNYb7GLbfciohVekocZpVQGEqxfJhP/fGvMDbZZqrd\nYmL3brbt/mdDjVIJyarAOLdqBhnO1StNaq4ilzVRV0Mw1AWCCUkcCjjvKjPU4JZbX4urr0MVnFQa\nWX2G6grBRsHnOFXMS0UUAYdbI8QB5wUsVB9ipN4xyKZxvlbpc1vVppVEDE87PfZKMwu33HLLOcza\n4VZjVpnXpsNzzh2zqq6Nssrqr4dU3/Sk5PPqVMROHbwLMc/yyiKj49N47yv/QgJzeogRvxG0QYrL\npDig1WqhOFQFjyPU27i8AUkItQb4xtB88dV/KjPKKFFLMFjAhxaat6uFt4TXPiggA9TqqPbwrknR\nXyCr1QlZDZVWZT7pMqXVCLFLigNMPL7ZIuLJXM6Bz/4CK2GM+tR17Lz626HXw4kw6PcxHRC8Yq6N\nr7UhBJxVBKp0yKom8OfcMSs/qdI3zoykggzmoDGKG+53Is8d2Fzddc/V/vAMVNf6UWebRme+lzAL\niNhZO/nZ51SMUysRyTFLw+MKbqjRdUjeIDjLSBorDTjUVliPWCwQ8glMapX56Yemp62OV09vDKu+\noIiACckpEnuIZDhfQygxq6wFGxrEkFgl1znm+y1x8J8ftNrFxiY20Gg1sFSw75HPcenltzFYSrTG\nAo16AMag7NMvIxKEWl4ny2tojKRYoiJovyQ0BOc84HGSUM0wFLGAahfnwYqCMh4hz2s4a2K+gVmB\nd5MEiWAZyaAxshFD0OFupVpggxUyt0SpAZ+PonGWsldCFI7N7KcO9CXD95dYPPYYeZggtKbwo9NI\nLHFkuExI+EoYnIAoppXvIny1IA+ZDJZwEiozsjuDNaaYO3YHU9vfiDn7mkQBzhLqc2FV0IRVn82f\n09Y386fHtfbzlNKwnYQTIPUpyxWybBQJzeG5jhTLSoOVp0ipx2DQpzEyjWqOD0qMPXxoksyR1cfQ\nskBYwhEoBwXO19EwAuZPB2nMKq1RFY8oRoLeDNKYQsyTrESSIMwi1EgIIjlqESc5IoL3nq+FC6dZ\nDt1LrdXCopF5jw8tkvY41jvBhKuh/RLEk9cyup0VGqPTOBcIvonkTUpN1EOdbu8kkiJZyPF5C5MR\nHAOQPuJzxI1jRJxYpWHUwBIiAdMCT45lgWSCc4IM5TOlAd5nqCRSOYDBAJdlpHIJ7+ogGfgM53Mw\nOH7HrzDvR5ja8Z2sW78bWY1MWcK5jKQKUuJ1QCyWkfp6LPYQn+NdrdItIqCKUFD2FihjF3Ft6u0J\nhAxBqsiYJsQi+GwoIKev63mv+VrNcv7zViN15yaWmVUjELd6xlkEjzEOBdgqn8+GWsoBlrDYAUYx\nV+KkEnKcw+Eq/1IS2DAqqYbpIk4HSEqo1XBZjlFASiQJBCckLcEUT06yHhoLXMjxoY66Ng6txmFL\n4CcwV6sCH8/SiKtm7Jpj/3g0S1n00ZRINqDZnKTbOUg+kkNnCTe5k7F1uxgU8+TUCM1lxClOBe8c\n/d4cghK1QbMxSkqOLG+hqSDkgWQOxwgqBam/QOrPYH6MWnuU5c4itWYDLQoa9SYJwVIXqFEM5rHS\nEJ8RPGhpiMtwGnH1EcxnhNpoZeIB6
NDcEyPiEXIyqWFD29+JH0aCCpyBOo+5FpLXQBPeBpi0MRxu\n1V53HsjIWxvJzcBVglmtneEQJGRA9rwc2rXCbJYwey4NI1+zvbIckOX10xHCtU66935oDoGUC1VU\nUAK4OqIOsxJhAUkRcQ4plqC+ASv7uHxsSBipfCQPhFFMDdUSYh8bzGJxgFoJPqdQR57VMBXIwOV1\npL5pGL6OVWRO82rjlA3Da1n5RmvNxrVEeS5cMLLUvCelgthfhizQ1wOMFtcytu4SijKxfOIBYhin\nE/u4WJA3x3Akohmdzhy13BP7iZhN4BstUlniLNKZPUAyxWcNGrVRAgZZQKxH0V1Blpc59Phhlk7s\n57IXX0VobSDUxvB5m7w2grpU+TnqcT4naUkIo1WmQ8FJrIIBrgoqmBMcShCl21lCULwaiYj5DFSB\nWJHHBCyrzPayg4VRxBmmIFL5UU48MXVxvl3lLYBKiKv/q3i+kZ+156UU8b7G/9cii7WEC1mtcvrd\nuUy0ijCqCmGMVY9MFMQZzsaHmwFgjhQmEFEIeXUNeJbgmlRazAnUmlX0TgSnEVwYjm0AZBgBXOWN\nVNPLcAj4BDSHPtZq219tRj4fXDCytKYvRZwyrtVOOXd4kb7v02hfTd0y6qKVr+BreBmg/ZJB5xTO\ne6bW7aToLpPXc8ro8C7D+xoS2jTz9eArf8CZI6knEMEZvojkmy6hvemaoRBZZZ7FAS52WFlYoFZv\nYtQRAScJGyzSjccwLQlZjrkGWQhoMgZxgbLfx+X1ypyo5WjRoaSsciPJYak7TCQObWTnsNglRUNc\nF08LBFTD6cULfnSNeQA2DABw2jF9/tGcted5n7MaOft6UH3Pcz6yrbbrnDttzq1yfDXhe+Zkw1OF\n1GWonZ89XiQgZlW+zAzEV0Lvwhrzs3Hm/CEZ7Kw5rjVTV199fVVeF8xnUY2IOdRFsIBYRNFq93Zh\nmES0Kk/hZKjSMxyQ0jBGLw7TCE6GO4dWhopWGfpkVQ47pj6WIiFrVlnsZ4+HYTZ8jb1uGE5BhWEy\nM0IqqkibhEprmGE6QMg48eVf5WCvyWVXv5HRVgsak5C3wBwOxazEWSARcJQYWUVoBGU1N6k4V4Va\nzzivaTi31YTl1x/6/EbwQucwLnS/54jsfVWnF+xOyZQiRbGCxUQqC0wFL/lqnpBO7zhVaM8QIs6y\nSi2bVDuMSziJdOYPYEkRg4WZwwQE+ouUcVCpYRxiUgm35MDZYcaKdMPgrQyTgDbMhPsqd4AKQga+\nhYQRzNcR81U5ia+Bc6jWafpAY/IS3Mgu0mAZLFSJO/M41wJfmQq4MIyABcqkVPkQhmFvq0wy1dPX\nypQqd6nxWT7It26je74m39pynm9Vv98sqOpzjv2CkaU7v5/F7gN0e0/TW3ycxeJv0V6f+dknEVNC\n7TGwDDMlkg2TWAAJ7wwsYOYZmboU7wPgmZzeWWWqm1NkoTkkluB8i+Dy02rYObcmcTc8dtrAGWqS\n4VHcMJpTNTX8M3CCcw4hI+Eo44AUE7VsBHFGPrINL4LghmHJKmHmhqaH84K4Ku2g5SJiCaWPSEFv\n+SCa+qSkWOyhRXeYb6naUdWvubDfKM7V9pl8yXN9L73gY3shsLpxPhc5LxhZ6u0pWmwjyAhH9D7K\n8gqWuseZn3+GlcEd1P2rMesiGEVvmTINSNqjCleeaefZk1xrqpyxnZWZmUdOF0h+44t5JhFW9Vll\nowutckHnHpec85jzAZ9Pgsvw5IjUqI9sxYdGlc0PLaTWJlnCiqXTWe21Gfu1Y3n2vNbu9ufa+Z9L\nG5w5rpglBoMecCan8tWQodl47k9faHJ/o+1/LS12wRx8743oEpKP0HLreerJRymeOcDxmcOcmD7G\n/J9/lO9804+QT+2iVq+jAmlQUsYu3nu8y3HOna4xOlMXFSnLLrW8QRkV7zMQmJq+fJhYPmObfqP2\nsIhU+QBLqBpJ4/P7ztr3p1VWqnI3ppXJhwG+cvCtMt3I28O5rlYhn03ateblGSGvgh3Y+ffFMxrD\nn0XAM/VbgoijVqucaefWbhRnrmXlc50dRDiX2fhcidFzXa9nr9Pa92ey+HpOf+656s7W1qc9n9qy\nC0aW/qAg1NeDGXWpceX6bSymcV5+6xWoXUntbeuqymPALODN8NmZyEYsB/TKkmazSSU0q465I8/q\nlEWPkLfWVOPWh7ueO++ifC2ca7HNUZVXoFVM357/Yq09ZuYQVZIIblg+r5owqyqTUywIIUc1EVOf\nWt6q5jOsNMAc4hXwp0mT+gsMtKBeb7Iyc5D2+quY3/9ZWusvod64HKwqbLRyEbLmaTvD1JGkCmMj\nVUm/WRXiVnogOc4NEAtgVZFmNfE0vCDPIqaUmGbVOLWqXVslvIiRLMOIeAuYJECHuZJq7asrtFoK\nZKhVtx0IDrN51EbBHGV5lDzfQFI/1PaOWI0eHRZ0GoaJnZaWavxxeDuDks5RFb2KC0aWvD5emSCW\nsTB/kEvX3UR741VVoE8Ep+VwF82A1UWooKqErEaW158VahQ8gpGT1aodzp3O9sH5wqbPR7jPqj06\n+9vDf46l3goqrgqunmcXPSfhhm2rCKrdYYUAa/wqJcvqwzxGIIRWRS6pnH7B6A0WyfMM71uoVnkZ\n3xinruCdMLbhWix1mdpxMynFKsKW4Ejxp2xs/hDLS09g1mB8fCvqEnHxKUJ7G8G3WK1dK1OXpf0f\nZ3LHiynZhM+qKGYVHvZoOYeEDYgM67CG9+FoPAZe6Czupdm+maSJ4HOMhGkgFscJoUGyxyl1hLq/\nGnVzYCNV6RK+CiFLZQL2Bw+ThWmc24Azq6oarMSHSbS/F1xB8gWO3QRdJBIw3yWUQumFYDlYl+TX\nkbkRNB1GbYEgV4HOfpV8rOKC+SwhNHAuAzF2jX0/ztUIKaCacCmQ1KPD8gsZmgdr4/hr358WQFYL\nDiGZkoalLVVFakK1PCcZ1poxaoZqHIah19rzZ5sWp49bohh0KHsFx1bm8Lr6/YhqWflHw3ZSimf1\nddbYRVg4/iXKJz9wOudQ9THc4U8nJnXoM1Rmh+FAQmWeujpqcXhtHCIOX1koJBsgvlZpA1aqMh6X\nmGhcj8czNnoV46Pb0dSn358jNKYI1sAosdTBzMhCk6nL38pcugPnP4srO0QAHVQaRx8fZtwjakU1\nPi1w6WkGRcHI6G14Z1XU0yLEBSStkNUmIYzi5SXU2E6Ke5A0gpcAcY7C/va0qevEU6tdR5ARND2F\naBPiYbydQtLjON8BfwyXFEl7iXIS/AZEN5LcEl13N86NgTRBGtCfx1mfFf8IkWdYyD5wXpm9gA/Z\nE7Cq1smFatd03sh9HUhkq5W5cqYqtyqWWxNSXePUrc2ROAFvghMDMQb9OVDFdO6cDnCFodOvK0j/\nIYrevqpOy4Y+gsXh+UNB1RJBETGWlw+x1I9sXjeCsYAJCEovPT6sZTKMPt75Kq1oa0l6hiyjm1
KKnuBmU+zpiGm4TyzGZIFMTEYsFfG9LmmWUDZO8dxXfpX6isWx\n2tGK7LHmx7n5Qp84kyjkClRKVdI0YRol7B/0uRcJJ45jwjhBlTLSLIM0Q0klVCFIENwlncoIFIln\nrg1wpYSKZeLaEZPJXQauaBiSM/NkvuDVyZSDyRamnEMvloiiGJEmyIpKkqRIUkaSpNzY6hAh8T1/\n76eZbbx5LOpKhiUpqIqEJEI0U6Wgq+RNHU3TQMgsNerUGzl8UlIRk2UhhVyRfLnE7MIs3dv7FHJ5\nTp44hyoJtvf2OXGsQr55F1Dd1BWqBQvXmzJyXa5u7NCYnUEzYg46u7zy3PMcHHZJ05jZmQZzc7MU\nSg3mjy2z0GphWSppoKHoJoV6jfmlefKFKma1SEZIYTbPKzduUGyucPvW67iex0udZ9GVMkqhiJ4X\nXB8epWOXkAiJubrzMs5YEEQpWqiROh7FtIIXhBz2dsilQzwpYDBeJ8kSstBjfS3k1tYe08kuo8Cm\n34P9fpuKmOPZg2+wN/YJ45jJgYSqGLQW/yPEOs7RIOxKTPYcpFGe2uwIVxlwYfF96NkCmV8AzWbH\nv8a4cRVnkGe3M8SPHVw3YTKJiU2FtfbLFIWFoQ3wcfCcfezJAHXmGM1qiavPfeWI3tm8wXseeQ8l\nU5BELp3uHre2RghPUKoVkNI33Z4w8Ei8DM2QAZkkk+7OqglI0pg4zfCCkHyujFUwuLwFl/t9fHtC\n4Ic4zpRxFnC5vc3Xu+sMdEhFgZu7r5PTmoRywmTSQ8gGiqyQZRpCCJBVXr6xiz/Z5gf/83/8p+t5\n30e+i4c+8BGUggmKjikraIZO0cohqRCOEzRLot2bUimayJlCwRAcDIYUCzlkRWP14iNkWQFDU6gU\nLU4slrmy1sYb9UnTjDhKWGzoqLKBIhTiLGVnb49uxyOWFBAKOcViOvxmEdEdYeR0kiTCDWzmF06R\nr+TRrCoz1Rb5gkVOl5CjECnJSIKYM6UhW5tX6Hkxh77Ns+u/gWNHlAszTD2FvN06smdTzyEOBXEk\nMRhEBJ5MZ3TAwI2YTCNEqJEvyuwNbOxkSGzoBIZNJHnUFqqU6+cYxwoIlZykoOsme902/iDC9AVh\nKiPkiNhIKeb+IyQzGu3YOAcenS2J/NwMm9ddgq0SegaR0qHWkJFESCoVGdw0mY76nFg4z7Ha45Ry\nMgvFUyR2hplV6PTWGU1lJsMuTtckyNoE2ZS93gFzpxaP6H1p9zNYrTqplNE7GJNEGvlSjtJsidC1\nCe+5JXIkwEgJ/ZCEiEyAKmVkGYRRhp8G6JpCzoDZZpkD1+Ny2+UXv3GbQzPP6qMf5Ph7v5WD+QZJ\ndQEtjJG8mMiJ2e/dZqm2DJLG+itXCNxDJF0nl6ugaCqqKvGZr32V2H5zKvuhx57kwx95D9/13T9E\nNVWRFRPNstAU0LOIE8st7MOQmVqNubkVZpZnKZZrGHrEqLuDnMkcm50jXzJRdYtao4mmWeRUhZ31\nDptb+zQad3u+lueLjKYeS80atXKJki5R1HUsRUXPKbRadVQzR+SOyBGjJDH12hymoVHK16nXyhQq\nFnmrRq5aolquYE/bmHLE3PGHaVVP8OADj+DHQ+xwQBbF2Gmb0PZJSkd58mIxoWrWadRWOTF/P05/\nQLcz4MTsI9hOxmE3gLRKFsoE0wktq4Kmu3TtDjQUZkoq8dhBTUKyOM903UXzJCyrSJoX6HZCRa+S\nBIL9wVtzSr5tAf4H3vcevnH5efJWBQYRM+dVcvuLjDmgUjxNt38HO9EJ+wlFU8EOfaadkIExxbRa\n7I7WOdU6TTve4MnHv5evvP6v0YMSrj/AmXSYLx6nbadsT68d0fuJH/sadwbr/OpvPY+ZV4g9yJkK\nSRiiySr5wptBS0SKmmZIlk4yDMiSlDgTiAyEEDQqZSRVp9cbkoiYhZkqpVKe8WBEsWwwu7qIlGR8\n+3s/yB997etEvo5uCZIkYGwPCBNBqT5D5h0ijdqYaoJSXiSjTOj1sSyLX/v9r/OBb65nZrZFHAc8\n9f4znDlznm57F7Wg8blf/ASGlSMRCrf32hhykcPDDjPzM3S6PcqFCqms48U+e7tbaLqOZeaQ0gQv\nUXnnu97BK288jz1M0LQcc60aRveAcNFkPDnk2OIi05HPfrtNo1xHUUooQsOySpTNeYLEpdmaw48C\nJhMb3dLRdAVZgGxpmIZGlvm4k0PizKLWqPDAQw+Rt0yu9r9IEFaQKj08D8qV8t1p1XsIuCqljGGn\nzTSYIjcDtKKBfKjjeBFLxyOCgU7FmKUwJxG5MuEkwrKaCK1JGO0g63lEqYMcNJHrDrpew0l6mPN5\ngm7K1IGk2aVg5cn7JWDnL3xm374Av/IyrYVZonjAhIBwt8Ga/McM9sc4o1v4mY2eVtBUiVargGTZ\nNGYDDKOPlE040VhEnxGcX7rEdGeH0a6g2TK4tHicxco5DttrlIqzKH/mEj/+o+/nf/03P8d89Smy\nVCZJU1wnxo8dqq0q2T1ANWkiiFOJKIjws5Q4TUmTjCSLGTkRThAxnbgUczmqpQoFSyWvp5w8vYhV\nLKPqCbquoUs63/6Rp3jn42cIQxkkECHc3nqVcmEBtbKEkrcoWDV0d8hsMWSm0UI35pCtNwHnpCjF\nVPOkUczcQotz91/iwn2P8uEf+C8xGlXCNCIxijQrLVwv5eBgSKk2Q5il6JqBqRbotjvkcwWQ0ruJ\nAjmld7CLrjcIY5VqCXZ27rDeG1HTJfK5Cp6XYugatbJFuSQzOtwk8BMyoZJrNmg0FtAMg0qxhGXo\nzC2uUixWyeVrWIUq1VIdTShoQqe7t8trz3yNSrXC1trr9EKb5RWLsZuhZAlj5wAtPuqGXbm8Ty/s\n0ygtU81mcekwd3yeer2KpiwRN316o9tYhUXsLCEtHlJSzyAlIfXSKXKlKkvVJYQmaJXmUWsxVmkW\n4jlUtUr1tGBGOYccSeylbd5K3j6avOw4Vr6N8DSoegRBmZPa/cRouOGQJC0gazFq/jTDgwlecY+9\nXpdTi1UiV6NSWIADjaF6i62DCcfPVNi/0kN9YIedGyMS3WChWkaa1iF4M2c/u9zkTGmFvGHQTyFJ\nYvJ5hTTOoSgJ0T1U4KoiSIDAT0njlEjKSCMIfYXFuTKVgsHI8YhTiVK+iKFLOM6Qc2cfRZIF48M+\n1VqDOE7RMpMHzi7RbJX4wjNXCEOXOIvpbF/j+MnzbF7tsjLXQNcL6JUqge1jJetE+TeZyxRFRVIz\nNGymoztY+bME/pSzZ86wevyn+MVf+DnK2ohyVcEe5VDUHImk0ZxbIUqmjMZd5leWyal5fMel1ajR\nHzrEkkbg95mrNqnVFE6fOguqxMvPXqFgGkiKRBoFFEt3U8+ylMMJJxzL51H1ArVGkyCYQuwycn0k\nSUIzi6TElCvzpIqERIQxs0SyN+L9H/44rjLBiW1KosH+zjZeOMJJBZY0y8i5CvcMSx5fvB/
f8XH8\ngES4IId48TbHyk+QpQli2gPjLIejayzMnKDdvY288ocsr/4D3FGX3d3LfODB72Pi/t/sDzeoF07D\nZIjjH3LmYsqkJ1Gv1zFllzj9C8GI7j6z/yEN4K8ifuOLLC42MfOC1DlGdjhmsz3ACX0KhfNksgJZ\nHpGs4ycdDLdKkubY3gMn8RiuvcGBc5PedJ/5OYnIVUhLEbbdJ9/MU6nkkKUOLalxRG/oKPSDdW7u\nfIGJbePYU3q9CY4zIk4D4viebIiiEHopaXAXLCL0E8IoxiMkiiL2O0P6tk2zYHHyVI16OeHYsTOU\nFxaZmWuwsLCKqVhowmB4sMakPaCEy7e+8xSaaWLJOo4aEyQuVn0F2cohlw0UoVKdb3Hh/CMcq76J\nNhclE1I8gsTBsuZJkgGqLCNEiKZK/PB/89OsrC6jyypzS4tUanXyxSJxCpZVwXcSIn+KF/kcdNps\n7B6wdzggCzMefexdjIIBge9hD3ps3OpgGBnDUYDtBCAlVGvzyLKKrGsEfsDUnhD4HkLXKNRn0EyL\nLDgkS0N0zaRcbiKpEioZLhEyIfmcoL17E9VQOfC7LOYbSKlOs3qOGWuG2YbJjFg+smeBK2PkU/Rm\nF3wJa1JjnO7iTQ5R5ApxdoJHTn6Aj174KSR9wmrjAvakhqf+NrVyi4+89++gaQ0WazN86J0fQS13\nkWsBjdkqsnM/unaStfaL1Jcep24cLTXcK28fTd7WO9i5EZA72aNqyuSPCc42TpBlCmE0BNtntnic\n+Qs7nHnY5r/40E/wY9/xT6hZi+yNRlz299EzA12uMhjsYacOtdostUqB1ZkGQegyCt6gPT1a4FLr\nE272Nhm7HYQkkSQC3dBoNCtARnYPNHMmMqQsxU8j/CgjTDK8IKVYyDNbr1EsGSw2Zyk3Kzz/3Ovk\njTJnz5+llEJBy4MSo5oRat7DtCoEUUQYQL+9y8fuX0RVIHZ89rZu05w7zuWbW7iRjKxJ5HJVCtVZ\nFk++CT8bu3uk3h5yGkASoWsFFAWk1EPJXHRd8BM/9d/z5Ie+l0ZjlSDwcKcBURQQhinnzl2kUFhm\ne3+LfL7A2VP3USxWmbguSjRivtFACBUzbyG0BMMsYXtDpNil0jiOJJtoZgVETBoLRv0uSRLdpX5Q\nTcqtM1RmT5JJgjgJEQjIQCJAQSdOFBq1BmG6T6u8zGsbX+PFvaucW36E47MnuHT8aWSRY3a+dmTP\nnrz4YTStyvo1G83UCRODbi9k3NkhDDzCrs3a9g52MISkzGg4pO5dZPPOmOv7/4wvXP1lXrF/mTuT\nKd/Y+Ao5Q0IvZ1Qq8/TsCVP/AFWq8sruv6RZeuAtn9m3zVia5TM0LEG2f5wbazeR4hlsX2DmNTLp\nAGvJ4sH7VghufohweB//6ks/B6rPoXiNs40nacgab/TWONjJcN0CFgqZqhJFBju9GD0u4g9nGfWO\nzkbMNCREquFHIAkJWaToRZ1YJERhBNGbMYuUpQhMVEljGsREcUooBI1qhWI+REozNF3HnU5YXVhA\nNku0FpZQrRyIED2y6Xe3cfsOthPSs6fsdyZk2jIdR+W++dPMlmdx45j97g4iV+CNV75K92DA0B4y\n6m+TK76JfivrFonQMa0ailYlFRFZqiOrOSRJhyjEVCIee/RB/vb3/yAf+/jfZX+/y2B8iCQgETKS\nqdAszZKlMlkic+H++9BzMn/45ee5cXMTQwQ0SnXOrMyx1e2CmkfOF0nSAEU1qDVnKBUqqJaF7TlM\n+gOi0CNNM4LUY+XUEywsXWDhxAVqMytoRg61UEXoMgZTkH2On34Xum4Q+kMkzWG7t86d7vMcOmsU\nikUU/Wib/HC6y5Onv58nVr6DtbURzfoFKloNtVgmJzRKtXn2bj3L55/9dW5vvoZWVBk4DjmOU9S/\nG39k0V3vIGcDnJ7E5hWFLI3YWd8kp+kMRmMMpURJW2B7cBRZ5l5524xl96DDerCP4xV58JH3Uq7p\n1Fsy5YKPLreoVYpcG/8xc/kcv/B9nyIvn+JXPvUJgo5FvC8zHM1z7uwFGrUqrdoDCKmE5wRMHQ3f\ncbGTAcVkiaF7lNd8P96ncnoTz4U0zUhUFVVT2N/eJ4ljguBNN0xOIUwC3Dhg6kb4ccZMowZaTCmX\no1iqYU8mlKp1Es3k4jueQlVkUGU0q0ogF7DHY9Y21tjYbRPERXa6ATudHr3emFiWOb40y6OnzjHq\nd8mbJWRzjmvXv8TO2hX63RHuPWPFQWQgpXmmw9cJgl1EqqJrJrpVRcuV0cwyaaCRBSALOHf2DP/0\nF/4FgeOh6hKmamBZRRozM1QaDRRdJXDHXLjvIo89+igPP36WVK4y9lxuXLnFG7f22Gv3mW9WOOzt\nkQkZTU6wCjUKxTKKbOIHAe54Sug7EIOm6xjFMoZZIBMZQiQ404icopBEHnnDJFcoYLsHGFaJvFbG\nymWYWRlJrpJLFmm23nNkz8bhG2z1vsbqSotHTp9DFgMePPs+DHOInzjs9b+OV7/F+FabQlTEcGbw\nE49b1w/Y7FxFN9tEHmjOPFa6SGj5NKKzaJKP6+2gSXkOBlsEkylq/NYo+m9bgN8d7KIGNYQ+4voV\nm9OnBZX6HHsHV5Ezh0b6CAfjW9zZG/CF9U8yzF6joV+k2Mzx4JmLbH3+Gq//do/CCYO5hSrTSUCs\nCHxlSq6Swx54tJN9TlTO8Dqbf6o3GSsYJYXGaZN4NyTNBPXGPJPOhCyDe2GbP/HptT+/8P3Bn//s\nhW++jX7j83/Nu3LvVOfX/9y3QWAjyQm15hPEQRff9hkfbqObORRTQ9IMZDVPiozneOimjpkz+J9+\n9hN88rd+nXJNsLvZR7Nk2lu7JGnE7Ru3ab7rcRx3xPnT9/Hss7/PyvwlvnGrQ9HUqC/nubPTZji0\n0a0d5mbrFIsNxmMbw7QYTw4Z2QXkYZ5mcwHXdZEklYwYKcsoFWcwtCmhN0U2CuTNEopp4vguGztj\nzj90if0715HUgEF6g3xB43BPP3Ld/YOYrLbJaLjNwTTgwxe/lSvXtpByc8y0dMI05XC7zzs+JHhh\na4uH5ucIBjaqLDORbqFkOlIhz6C7hrXYZFaCqlFiZFQYJgNIEgrqMpFYx/Drf+6+/4m8bSfLIEzI\nFQSGWUMWgtvtNlfufI79kUS3M2DcdWlvucjmVX7t0/8nvqMyjbdIwhZfuPwpLp6/nwfffz+t2VmG\n4ykhDtUClPIa1WIdJInUifCCozFLWX6AUaeKNxKESJiygm6k6JlKkgkyVX6LFb/9ctDeRlJV3PE6\nkWRh1ZYwC0VMy8KbjAltm+mggzs9RIlTwtGUwPVQVcHf+I7vZHToksQu+5vb1Mp11jeu89R73ssz\nz32D5WPHuXPrNS49/A6GYZ+PfuBx7r/vNEtWEUUrMNfK8+rVGwzHY0gFxWIJQ8+hCoMwVonjlCAI\ncFwXw9DJUBkfdolJURUFq1ynNnsGLV9DVk0E8O
BKg9euf4Yzy8sYJQU39shSlaJ+tJmxoufob+4w\nGYxYWF3kzn6HxpxGx/lNnO4NPDaYXZ7HlWc4uXCKsXOAE3ZRQ5f67Ig410Evb1KUy4zbfUrh/Vzt\nbaNlJZzdKflmgaXFRcrKKuX6X0j6ALyNJ4tILdruITUzY8Immu3h2FXmarMkueu8MvoSTWuOad+i\n0iyRiwW2d8jy3AIdu09vGDPs7xBFLrpUIVdRGQwGtIpz7HWvUtRnOHlilSDJQ/fFNy8471IftkjT\nCjq7lCtFxocj/DRGUzISP+JHvvMsbhhhj2KCSOL1zSHHj8+Ruhn1uVnKJYgDn0a1hVXKoRsFzl68\nhKHrhKGDkHWyNMJ3R9y+fhXNnKfX75C6EaoUcDjwqJZyjH2fRnOGKI4JPZtmuUS+XEJWVVQEqmGB\nJmGPeuxde4OFl58lS12Wj61gUCWIXbIYhGWQKzaJ/RhJAUmSiDxjPB5JAAAgAElEQVQXSVMg9oky\nCUXReOdTH+STv/1v8dOIVEnQlByKFvHep9/Nxq3bPPbwJXbbO4x6MY2GxurSaW5tryMUn5JIWWgV\n+fwXv8F3f3uTcrl+t6PBsBj3Dzhx6n4cd0Qap6iKRBxBe2uHSmMeNJN8zsLzIlRZRlU1PN9lEPVp\nmAVu9F7Fnrgst04wDhIOvKMnuq1MWHiwybU7t6hPTyCVNznYn1LVLzBJRkiSjp4OcdMpZ5ae5lb7\nGbLEo7WSZ9+HQjEj9uvMHXuMctohVLvU4jxmaY6HS/OsuZ9lbTxhFB4yL/5COiHgL3myCCFkIcSr\nQojPfPPvqhDi80KIW0KIzwkhyvf89qeFELeFEDeEEB98y39qrnOucZHRZErYMyjlmjxUfxfhOEQW\nM+TFBRxHA6FgJxFGnKGHOhvrL9I9cNlr3yTLIrI4T5w4SGKEpcv0YgknVBnt2Ly88Qyj3lEC1mAU\nEVWm5MMUPWcgawqBF0AmkcUykqQiDIiijMlEMJpG+FFE0cxx7sIqx1smhiJYXDqNls+xfmed0xcu\nYhgqiLtEp5qq3IUJkhTK5TyBs01Bz6jUdPwYcgWdQJJRZZ3Qm5BENoVynmK9RqGcw9BVhCKBfLcI\nmjOrPPT0x+i6Cb2dIZLeRC42kCSBQBB4EzISNE1B1w1UWUNSZNLQJQ49kuCQJBzSaMzz3d/z4zz0\n0PuZjiYstgx29u5APGbqDnnx+hXGfZ/m6nkmfobtHnDpwuOcWFxGs6rsHjho1QqvXnmVw2Eby8xj\nGHl0XWN36yamkSOXLzO1D0mJqC8ts715m2KxiKoa6LpCHKdMxhOSNGbsg6oeRw8tFE1jGB7gDXpE\n6dGxYuISowODZvkEA/UyyVBGlCu40QE9f5f7C6cICx6arrM1/Tx6Bkqg4CcRgadx0J+hH6yRytt0\nRq8zjvex8hqp0uF25zZeUCTrRRhpC2f6Z3T/VY0F+AfcBfv+E4f+r02TF+1U2Ixvkq9GLFyyMcIG\nS6USj156nGK+iZJOKEcZUqqxN9xEmIKCdZJyfYm0NyFRfISc4Rl72KGHKRaJwzyxewfF8Lj42Dtp\nqE1GfyZ1vLnfJetX2WIPPwzZ2+8S+C6aKSFnMqYmoco6kiRRsMAjZXlxgXMPPMDcwiyuSGnNniRf\nyqMIiXe///0YlowiII5DRBYwneziuUPi2KdcMFleXqRWUQj8A4p5iXK+yWhkk0Q+kTNAxaNRraCo\ngixN7k4lSgpEGVkUE6UK+VyZ+y69gyvX3+D111/AH/Zwh13ccYd4aoMX3MXlkO8yD4NAZDq6USOZ\nuCiBQNVk6i2TD330o/zIT/yPpHqZF166yWsvvEy5tsoHP/aD/MP/5V/yQz/8k/zgj/40enmGJ979\nBKZikGUpy6tnef/jD9OoNrh96zUGg12yMLgLXZTEbN54DUnKCKYhILByeZ7/6pe4c+My9nSC4G46\n3pmO8V0XrC4LK1VGkktB1HF6EZKisNI6mr619F0maY/haMjE9gnUMaprkBM5Us1i3dsnHtoMxxvY\n9h2m7hrN1iJq0CJLFApyiDap0xm2UYoe7TseE/UqsZ/wsfsfYqG4TKl5kla5iO3u81byl2H+WgA+\nCvzPwH/9zY+/DfiTlMUvAV/5psH8KU0esCmE+BOavHvZwACoFg06m12adYmCVAc9Y0/uMdpsM5xO\n0BSZ40aJLWmAcBc4f/JJXn/9q0TJLOcuPsJG/HmMSYM48LEWyjjTMXvTHnlMCoUmr73yWRqzC8RG\n54jeudlVep0bXFz4Dg7jzyEpEbo6SywchBwgK5AlGYQJTpzxyOPv5MIDp9nd2sOZhCytXqDayBO4\nAY16jULNRFEkZMVCkySiKEVNIpLIQU4jVMvCcaeILKVYKNAf2Hi2D+4IqzVPsVwiDG3c6QA5KxPE\nAlU1kaWEOEuJ4xQUwfb2DsVKlZ7TJ0xDNto3qIcWRqUEQsJ3XUwhEQYRSDJWsUyIh2oYyLVZ0sAl\nCyYYpSpBHCFrBj/xj36eietyevUYjmvjukPSNML3x6g6/Lc/9T9QyhV57B1PQSrwgiGvvvAStnuI\ne7DHSy89g35eJ1eoI1QJESjY/R5avohp5Nl+4wqqatHvj8kXDrm1fZN6a5nRpAciwukWGEhd/tMP\n/Dy/+OmfIUpkJHNEZ3cd7klK2SIgyd2maTYwpG/hua//Hh99d4n2rS1WTlxi++A2ktpDuaVhnp/B\nVU1UcqSuzcjz0LKzbI2usDijoyWLrJ7x0bxHuDP5DAtzjxGGDkbRJBipzOQLvJX8ZU6W/w34RxyZ\n9ODfRZO3e8/v3pImz6rmmZ85xeFmQpYVuLBwiVzZQy5JiMRCkYrIKliFCkZS4PqdLS4tPoCmhVy7\ndhtnv0moeZRrVXJSk8F4ypnKHJ43pjeaMHf8EQ73h7Bz9OKH+zfJ5XSG/TWUYp5yocH4sIOqyiBF\nREiEUYQbyfwnP/FjnDt/joP2mCwOmV9eZn6piTsZocgSRtGgVj+GQEJIICRBmkCYeMSJRxb1iUKP\nzInQrAK337hJd+2QN67fYOwFtDtdbq3dpL23Q+i7pIogywRh5BGmUzy3j+f2mI63uLm2iaEZPHjp\nKax8DsULKR6/n3TgkmUhpqkSRzFCVtA0jSzxMfI6WRqjqDqKpkMmEUw90uwuTvLMbINTy4ukSYqi\n6ghZRkVGklNKxRolqwqZgioyPG+APRlz/oGHkEn49r/7k9Sbdb70pU+R+kPiRJDg4UYu7e4BGxt3\ncN0BnYMOm3fWaHd7TANYv32F8djjWu8bVFsVuv4Gl299kXI5x3ve+3EWaxcxj02O7Fm4W8J0HmSU\nGvi5P+D0aZOb+wfEyxOcaR+7O8INS7ymeHimxcGwy9X2C1x1bpPnDKQ+y9o5/EmMm27hTlMKlSkl\nrczu1k2KjQJCilmYX2Zz5+jL9V75dxqLEOJbgIMsy17liK2/Kd/kX/kr0+Q9/5kbbH6xzfCGiuhU\nOWgP8
OIEu7NGpbRC6AX0xwd0+zHHz51huLVGIns4/pDV+xoszRn0hgO8fszhYERRaLTHY/SgyLQb\nsfbyC0QouP7ROku9WKBVvkBP+jpqSSApMnKlcJfmOhZYeRPfC6k1FkjcPt3dq/hBj3K1iheF7O1s\n05ipsbA8h2WaSEqGkGSiOEDKQFYUclYDVTbIUhlnZAMSgediWhapLGNkgslgxHDQIUplRhOXSb/N\nuLeN7ztEvofvpkipRJLC669u0jvwkTJ44okPsbvR5/KNa3R2r6IuLyArBu60h0SErAhSSSbNAty9\nyzjddVIpIzezQGVmEbOYRzc0DMMiiROSLCMTIIiIYx8hCYr5WfK5IkkSEUYOQeChaQY5M0eSurz7\nqe9kZ2+Xj/3NH+Hc6bN89fkvE9pjQBCnGrpQccZj+oMhqqayvbVFFPmUqy300hKWLPjs5f+HYAi+\nJxPrFmkCE38HVZ1Fjd91ZM+KZgkCA5053l38eyTKHBWjhhzfj5ArKPkim9d3eeh0gbVrt1jNneFM\n4378ICOYgtd2mAYRihLhdOZpFU6TaB717Aw3Br/PK994jquf3uCzv/NlelfeGrHi3+eGvQP4NiHE\nRwEDKAohfoX/ADR5F74rT1FKKOfOsMIym16fnNIkN6+QG2UkhYC1A4cV7UlqtVM8/HgVVTtkzihi\nFle53u5ghwnpeMjM/AQnyyFtSyw/cJrtOzuU5kN6Oz4PP/YUz4/eRHXsdDP8Wy9zbPU0035Mo5wn\n9CK0vIEsRai6RnnmHO/68PvY39zg0rvfjdd3yFfyiCjACyLKpRaes00augRe7i78qZSRRh6SZOAF\nARIJslEhFxsEWYiIMorlMnu7G5D6aJnKzNwMMglyo4KQVSRFJ8tCklgAMWkiISsRp07UmHo2zz37\naaTY4Pt/9Cf59X/7z7EdG6m7y9zscdAKxNGUxEuRlDH63BKi/DCGkqe9e5tCpYHnjUkzwd0wMiNJ\nItI0RZZlfN9FUzWiKEBIgsGwSxj5FPIlZEkjJUOSdaQ4wnf2qBdraK0ajzz9tyhc/iMkPWQaW5ix\nj217xI5H5AeUCmUG3X3at2/QWl7hD/7gN3nfR78Ls1ZDpBMahTl6m68gxTLb7RcYbUvIzaMVfK02\nopCkIDX53evPUc+foGxB3TqB73TJYo+H36eTR7ByIuKNl6/Rui+HmgpCWyDrBnVDZjJZwJD3GdtF\nzNQhtCZUpsfQVly0hRpPPfYDPLv2z7n2pb/4dPn3kRn9DPAzAEKI9wD/MMuy7xNC/FP+mjR5zeQY\nqiIjAnhxeh2tCqP2dSYTHd+4ieTrjIVPsXzA2st9DEvh+f1bpIlNI/YYjrdZKSyRiJCDdY/6ckD9\nTIO1N65yamWOUVRlZdZkd/fyEebbB84/yfPPfB3XzlEKI8aOR2PhOL7vYKg55o+dptqokSUR9507\njyioRHIOTQ2RlBxGoHBwcIP++g3mFu5DSBK+N0IAaaSQyWPkJCMIU2QkhGaShh6RKFBppZzTJYb2\nFEVSkaOMLMkI0yH1+gKGWUIkCUmSEoQ+Ig1JFYModVHTBM8PcbwBv/pL/4r5mRLba3ssLYXoeZNq\nsUGGTs6okMgJiaySkdHt32L9zhsMx4dU6lWqtVV0HfzAISNBFjJBkOL7Q+IoI18wMdQi1UqL0biL\nLKnIioZERhI75HIFwMBzfeIk4h3v/CBPvOsDfPULn4SpQ2/PZzruEgQJQeBhOy4TP2Y06BPJEXbH\nIU5SJCHIJBslk9hxdlhaqhCOWsws6UyCo23ygReRlMtIvoHrXaXv7aHMPcaZ+iwXL/wNfuuFV7m+\nnpGaGxybOUt9QWft5jqBo3PmvkVmjWN0p3eYHExQmlA0W5xZucBnnvktPvDwR/nalU+hqAqfufxT\nEJZ4K/mr1ln+xKX6Wf6aNHk5b4ZxusHxxRWG3haHhyPsdhezeJJ0NEbTK1SqLbZGHS7f2qFVSgmk\nPHOLTRw/4sLie9gYXsYN4eLjTdZ3D0nxmDmm4mYjpDTBsYvY1voRvVcPv0RtXkNSDjk1c4YLl57m\n6msv3W1ZXznN2JvQyPL4oy43d7ZYPH6KnGkRBQ5FXQVtSqlY4uWtPWZPPoGu6WRZDuIIRdMIgxhf\nmHfb+0MfKQ3JQoGRl0m9EqVSGaO7z3Q0oVCsUchbDEYmsfBJvDGpLCFLoKJwF84sQaSCKEoJfZep\nr2HZ+1ztJyyfKmD7Dnr/AEVREFlMLCtoko4W33XJZCRax5apVGrEkc9h7zblUgNd0/GTGD1LGNsD\nFElGtYooioIkZdjTMYah4nk29nQPTTMw1LuuWRC4FEp5Ij9mOh1QrTb5lm/7fl569VUUSearv/vb\nuN4Ee9TGc3x6fsLs1GDg9Fnba/Pa9T/Gt7YpJ2eZhGNmaicQXgUn7iAJG88/6vF3gwnp/pi58v/L\n3nvG2paf532/1evu7fR6e5l6p7EMySGHRZRF01ISFUgyJMUOjMhGAhg2giCglAApyIdYsQM4sQ0j\nsgIXmqqmRFHsnMKZuXfu3H7POfeUfco+u7e1Vy/5cAUodDgwkCgYBfDzaX37/xfW+2C9/Ulx3BBP\nDdkQQt5970948+Zv0o8OqRRXsbJFuqczNCshl9OYzy8wPR1xlLWYKy+zdqlCxVjisH2dw61HrK7V\n8VoielAijI4ZuHlE6Yfd9v9HZMmy7DvAd/70+f+1TN5e5/uIchG3mLGU32S8u83Khs3+1ox8XYRw\nTKN2jc3KIoPZHzDtjdALIkeHfYq2RFKIyEtFatUyirnK8sotZj2VSRrQ3k0QMpDzHfxxGYp/tgI1\ndmRqBqhmif7eEdNLPs405uqHPkEWydSNEjPPxTDnWF80kAWR5u4tFpfWHy+wCHzc0ZCNS9eo2DXE\nLEMUUmI0gmiGYRVRwhRPECB93A2gRwZSLg/SmDgKqZZr1MpF3MkUWdCoFqoE0TFh2EdERlUtkixA\nknSkNCDOZCQ1xJRsZDNhFkaEgUmUePTahzQf3eP5lz+LogQkqFh5G2faxtBsnFmfwWkLUfBQJI0s\nnTGbRgSKTRJEPDrZZfP8E8ymE2TTJs0yBFFEkVWOT7YoFVewzDxZJuAGQ9Io42DnTQ6bh3zisz9P\noZgnTRIiIp6+egUEiSQVcCeHNPd3+MZXv4Y7GSAkNXS1wqXL1xhqbUryZYaDPp4tkbo9ioUExASE\nAEH8YZmQJEjRdQPVyKHrNiUjx1SaYRVz5NIinZ2MdjRlfdXg8M2M8uKEmeexsbZMZ3IfbSIh6zFO\nlNEbfpsziy9TWKmAP+I0vs7Y3UaXTeYwGSTvH7NIX/rSl/5ddv3njl/7tV/70me/+GFSBDJRIhRO\nsHMlxt0I9/QUR+pi5uoMO9tEQpft/QPUuRTNytiYW+TS2ctsHdzhidWPMRIcMv8m6tTmaDpFiWPc\nnkthTUQVK6jAnvVnmiur7TJlWaOemCShyOaZKyytnyeniRA7lA
oGgqoQ+2NyRo5YrUDgoWs6uw9v\nkKYZdrlBo75AnKRoBiRxhibJIIlkiUiKhCbLkIEkiSi5CkmUgeAhiSK6ZUAKciZjFQqoskmKRxgK\niMLjyUzVyJHEY4JMJvA9skTBdydEfkqvPwExZW//kCAUCGMRQw0xdIswcGjt3sEbndBuNxl2D/Am\nR0iyyIN7d1HiAHfiI9oKWZYiyilhHOHP+iCkDIfHpImGoolYlk0YRyiKTgpEQUi1MU9tfoNvf+33\nmfk9FCkjTmKKxRoZCY4zRohDFFNm++FD0GxmgxFDz0Evn2Xh2kWMSoY0abO4+WGS8SFFs0EmJPiJ\nz1rtSXb7N9j/v6jmnhVr5FWNMX3OlM4gCzO8KCTqTdFFi5JR4LlLn6Q/zAiMCZHuUdDzGPrjkQPC\nEWP5hIq2ytj1GLQOkeMp9x7+gEweohgZLjItZ4zrOZy8E/ClL33p1/5tu/3A2l2Wa8vcu/M2mxtX\nmXgm90dv0n00JrXAjIocPopYXzzH/sM7rKyu4py6SCWf9vSInJdn6dwyB5NTypbJs9VfZWVpm1//\nn/85EyVk/uwiLeeEslpnlvywRmAQ+Twcu9Rsg6JdxbAFVDHj5HCLLDEYD/fxZxK33rvBpQsXWF5a\nZDx1mI67rF58EcNeoNPcQs6JOKNDjMI5Qq+DWlhDzTL8aYik6AjCYzlI1wuR5Mfn6qpApqikXoRs\nW7iphJhGqFoO3zUxVIMk8pEVEyFViDMbOXWQYokw8pATBbPeYDzcZzgYU6nVmA1OiKwyjw56JCnM\nz5c52B+hKiNCb4ozGxHFCotTFzkz+N5r7/Hxz7xK4maIakwWZjSPdrl/85sYeoFnXvgMqXQIwgIZ\nIaQSg8kUSZQxLIPI98kkmWdefJXvffuPefvN6/z8L/wS5dIyCDHDzjbjcZ/Vs9f4zOc+z36zySxy\nsIwCSxcu8tM//knCTOFv/nefZ97u0UpiqtFtTP8ccrGDU/gtnrmq8K1bf/bNTFNiOhkiJSbklkGQ\nsQWdR9ObDDii3wzpRgcMvC0IReaM84S6Ts0qIdef4Puj7yLvV+hX7pMrxqxcXGW+/GFOvSN8aUTV\nvsJm9SqDwV1k4Zh3mPCj8IGRxZ0NQJxwd+8mUXxICCyuXmLqnuL6UDDG7O+00VcF7JlHrMNotsez\nKx8lHTTZOR1Srb3Ce3f+JU99epFb7x5wbfMFDts77HgjCsMKx/4+hq7/0LlJqPPq8x8nuPuAwWjM\nyaMDHHeCUTDY39nBGw4oFAw+9ernmV9Z4pt/8GWefPY5TLNIEMfE7T0EKWDr5n1qS8vsvned9fMX\nyGKPNAnJsghNtYjCABDQbBMZAUXL4QcuSpLi+jGC5yGikwkiomJSMJfxJ4/IF4sEfgxZgoiMgoGb\nPh5BTrOEwcEWEylm48I5To57PHP1Cve3t8iXqtx/7w6i+CLdaRcpVYm8ISQhBx2Fk+MbrF26ghQM\nuH39DpqeoWgRxco6spxhVc9x6eolcvU54tBDFmL6/QG54gq5Qo77d96mYClcv/4ml86d592bb+KG\nKc889SyWVSJIXG6+/q/wI4PK3AL93jFZCifNPf6TX/5buP6Q3//q18iST2Co8A/+zu/xy7/+PNVG\nnrFTJLCOEZgxmmbM5X/YLE9bI4p6jefPfoG7O/+UVjChZm8SJD7eYIQ4q6LoMnPRBurcBudXXuTm\nw3/Nfv8G9/7wX/DJn/pJFuRF7jV/kw8t/ww3Tm5jl1KefHaD02ZM0HIIjD6B7/Ho+P23u3xgbtjC\nesrwNGCWesySNsppkdjOSNQAU6lSzJvEZorkGriRR01bfDyS2u0zbk0JPJ3N2jJbrfe4c3gT3XyG\ntVqJb7x7m9mhg+OGiNUqhqTSLPyZG1bY11BUj2ef+CSXnr5GEvrEgkKSiFi6zdWnr3LlyRcwDIXm\nQZtet4UsWkSJRBKOUGQbo1SikC8yGrWxyzXC4R7jQZfBdIquysiyhCDIZFJGHDgkUgJJhCrKJFmC\npmsomk5KBoKJEEQIkoKWL0IoopEgSBqKauJ6HpokEyY+k/GI/ZZHpXaGi5vrlBsLTAcdetMAS9c5\nPWzSHQ4YTiZUqnNIssF4MKO2tEipWsYZpMSJx72tRzT3DxG1ec5eXkMQy6ycOUe9XOe4+ZC9nbvc\nv3eLe1uHhKHPyso6D26+TrFUJYodOkdHKKrIZDggCoa0mluEsUe3PaZgqVi2QJaCqOTp9dtYmoUo\njjFTh0tXPoYoCKRxwsQNeDT4OtNRynjiU5KWkL0Ce8cyj7I/k8ornujoOZ/JqM8oHpPTUgR5AVGd\noKoVXn7pr3ByepMnll9CUnJ4fofxoEexfp5yucCwPWQsx9x/q01uTmQYHnN6fA/fneJEexSyEoqq\n0Ekc8mWdu187+Yvlhp0EU2jkCEKHaUdDnU6IcqcoqU6xaDL0TlFmKpZRxx+79KL75OICoqRwJJyw\nlDPZOnkNNRVxHZM48flXf3IL8jI5ZZFGpcHuzi6zOfWHzs2bJk+sfYJizqBUrtDcu0cYZxDrOKM2\nWTAgDV3qlTLFIjTq67TGLubMp1jUKZQaZEGCma9TSgIU3SaIXQo5mxiV6bQHkogoWGhyhixBFAVk\nYUSkiMipSioLJEmGJgjESCRBEy+TkTKBNBRI0phMFUjCACUJ6Q67dDtDWm5KKio8e2WNo9YxQRBy\neNpjfe0sO7sPkaUQaTpicfNpgtmE+cU5LLtAEgvIUo7D5tvkLJFKo0rie9SrOjnV4Le/+lU+/PIr\niBk4bsTTz/44mm5x4/q3KeQ12q1jVs5dpVpd4OBol0iM0QIPVRfZ2R/TqCT4wgPCzKfTG3NZydNY\nLLK5eR5NsfjaN77MtYtn0JWEkAAxTbDtPJ98/qf4ozv/iIJqkcgalqTTHA3ImeYPCVCBROKr5Irz\nhHHIYNTlQ+sv0B9JjLwJvdN7nF39GP2kRKd9h9To0go7nJmUidji6JHK+XmX/ILGQXeKFyRE8Ywl\nu87pCCp5i8A4oaJI6Poi8DY/Ch+cG5aCMAwJTQ9RtcivlzAFG30hoT3aI5fVicwEfzJlfukM41GT\nJFPwU4HnF15hp3Mft68wnuZozG/w1r33iGcuyAJifsjOwZT5ywoj54d/q69e+wRXz11kd/sOgpSj\n3thgPB1DEqLrG7Q7J6wKGqpRwZYaLG6UuFpf4t/8i3+GIVcIEpUk8giCGG+W8ODW6zQaFTTZIMlC\nbLtBGqfoeZUsSQhDUFUT0oA0SVAMlak7JvViwjhCCgVIbUxRYOYMcT0fRdcRvRmdVhc/ybh3NGHm\nBwihRK48x9vX38bMa0zGKUvzNR5t32NlbZULG5tousvu3oi5+RqaoiIJOofN20iyzYc+8iG2HuyQ\nL1i0e6c0j9tcvibx8c98ntDXufPeDZ585lkKpQqKYVKrzVGqzDM3v4wzHWJYBdbPv8g//R/+U6qL\n8yyUqiDKmEqB2HcJv
RDJkAjTmMbcMr4XMjc/T+pPOeiMufzsT6AKKkgpYegxX5/nTHWDODyHG9/B\n8UIWG2dJp/+WmJFRpFJaYzBrEcoOf/XMF+jZ81w59yu0+w/5o1u/gz04QcnlWFFrGLVnOTcHllhk\n2xHZvLJE0PKYGLfJkFlcvEjFLvGDrW+g2LDPfaSJy86tmA+9UOL9ILxPGeT/UwiCkP2l/+oypycR\nhibTbw6QjJgsFSkv5SGFTErxfZe5yhz+MCYUwTbWONi7i1GZIs/qJOM+YskgFSRUS0WUMpy2g66V\nQesg2jLhVOIHSwf/7kv9e/yFxX9z9u+ydfgm077P+c01FgKTLTNGQ8Qd7iBIkKYVfDkgYoQgZtjW\nAMdT6Jw6XDjzceLIp3tyQCJkDJyAQhU6swmik/Dkxkd46PwRc/IV9GKBf/wrv0+WZf+39q4PbFKy\n1Zsy8ydM/S7VRo7Ujnnxo38JIdUZd8c4h6DnFcJZzGDkMx0OGDU72CWLqnWBLEmI6yJRT0W0E66c\nv0zqd5GrKZk5Y+RHWCyyMnfpg3rFf48/J9zcfYur5z5ErEr0evf4wfQWZuyw1XlIaiZ00wleInF8\nfB9FWWHmDQmiM9ilHHqq4s2ahM4DPvzSr/Dhpz/PmdoqWZSw0jjDy8//IvujDlX7KTqDU2r5pfe9\nxwfmhhFoxOMMChldcYYxKvHd1/85uXwDUgu9YOK0TkmwUOUcohJjJQ59I6JzdAiZhjHL45Y9pEjh\n7o33GMQpxWpE4kfUi5sks4DDzqPHTTf/Hv+/Rawc8P2bGetnVV7/9ilhvUff65KSks+/hCOMGST3\nOXvhIjllxslJDzWqkaQSU61LtjugVH6OB3e+RbGyxLnzLzFuRihoKMKMZ9afoDW9izn/FLfe+vb7\n3uMDI4s3bJPERaraPKkfEOVmaNkqoppi1yROjptc2rxAOxpiSFNmnQR5zaXsFOgzZeOJCzSH++TD\nAn4nJhQSyrkaBzdaLF80mE576HKeXHWOn01tBtMUsyTSuqP2fD8AACAASURBVD3lL3/8M1Qbc6Re\nSP90ih8M6DRPWFhZopQv8GBrn2dffJXVlRV0w0TWM5JIJI1CVNUkTQOCMCFnaARRTOe0DSLs3HyI\nbohIsg+JTM7QiVNI/CGqIpIkKVbexh2MGbTGRBkMxz2iNKXT67FQXcbO29iCgeMHSChIgsTJ8IRB\nf8T6U08wt2Bx68EWdi7PYk2HdMxvffkHXFgrctAZ0+17jCcxsqJyfsXkxY88z1uvf5dSocJTTz+F\nkbeZjPqUixXu3b7NYc/ll//63+T6zXcYNm+xv3NIY7FBJlp4ozb77QkvX3uefF5FU3Teunmdj73w\nJAE6f/TNb+J54AYqeUPg7MZFVtdtQj9hbvMiT12+iGrYqEaOydjFyudZXz6LJMnIskwQ+GRZxn//\nT36d5vgt+u4RU60Nbo16dZ2MHl7kMmcs0kpOEf01Lj2/RNJ9mpKRcRB3cNMQN2qRE3IkzjuMpHnG\nJxHVhQnDkymIJdbmX+Q0eETn9jb5y2W29iRWahvsen9Ac9ol6Gic+K9hGA2k0p9Du8ufN1YvbNDd\nmzIZ9/DbAfa8gqVLuMcDciWBsqVwfP+E6tkGVbvImG2iNGSloqPkcnS6x6iBhyIHpLUEzZQIBjHz\nZxZI5QDZU1FFl7IJEzfGk2co3jwLaxZJGOCN2jCJGI6mxFkIqkK31SSc5JAQ2L7/Hmsb61iGSb+7\nw2TqUl3YxA9dVEnHnw0xjDpRmHLwaAtdU5lfX2Tv3jZx6FObq9I5aaFLoOVVUlIkU8HzZoSCxMnw\nCE2yCOMUIRFZrM0hywK2YaPJeQqmRiTERGnEmbkSw84xkgT7x12STOHksM83vn6XjfUG1WqBxaU5\n7h5OESSDmd/GljJ0Pc/enTs4oUo48Gl1PPzdLb731i4LlSL5gsRp2+MP/uX/hm1aHDRPqTUW8MYR\nlu2xcf4pVHWbt+49pFpvsFBMsWWRxC7x2nfeIBUM8jmdQs1AlBRk00JIC+jWjJIu0T5uktMMxFyF\nXMFk0D9lffksURgjICLLMmEYUrV1moFKJkMpyjNNFGQnJtRN8plMkojIjkon7hDEMNEeIjll7MzE\nDzqIogxOgNT/KGyEvPhSgchrgNMlHXXol/dZNDbxrmhkhsckVpkEA+ToSWZ9lVhpM1es4p+YlOeu\nAF/+kTb7gZFl1nPQTAFbMxlUBGwxj+eOMSvzTIQBBArGWoiiynSCJlkMkqtx7Lp40oRCcYmpI9EZ\n9chX8hAFNJbO0D/dR9B9fA/GQcykO2aleo5k1iOVTNaWl4mQyWYxSRqhIhOmIWGc4IxcolCgP3VJ\njsc88UyLvGay9eAdVtY+RBD4TNpdps6AfL5Iv9PC9xK6vS7XnnuJ7e1dTvYfoWsyqT/BKOaRZJ1x\nc59UAllVKVYtnGmC44uISoKtiASpiKUXsfNl6uUGimAQJTGWJTGeDojEhMLCBu++9wOkfES7JdAo\nj6jVNCZTl0Je5bf/5Bbz1SquP2F+vk4pZzHLdMoFm9uv7WPmVQ6O3sDxp1zeXObaU0sgqEy8+xwc\n91AVhzsPh3zxU4v0gwg7Z5O3NK7fPSUVUpbrGnn7DDduNDFv3+OJS09zd+sW7faAwWmXF669zKPt\nbcqlCwQTEVU6QlNFvvJvfsB//Et/ldO9Luef+zSOM0ZRNbIwQtdNNE3jp3/iV/m9//ZfIysSc43L\nVHMZw/4OQlZHDaocz3bBTAgSgbyco0AJV5PIyWPCWQ450JgWuzx6912WrBql2oRZEKKVAhaLS0RZ\nSGf6gNTPkNIIPa1gqCW85B4l43NsN9uISh1N89hufuV9bfYDI0uloTIY2jTmRIzZjPHIRVV03KSL\nKhrUaznS/JD+6QCt4DNfNZh4PmI8IwlUBtNdpImCKRhkaUpJXOS4u4MoyFSiPPk5hYnzWCK7P76P\nktWJo9tMOypPPf8KmedxsLtHKkMWJMRhRn8aMvUSDo5HPHH1Gr1eF1X2mU4yvvet73L5mcs097bY\nu/0emlGitjRPY26ZfL7B0sZlYtnk9a9/DUFQUOWEQilPQoIYCRhKijcdEQug6TbNVpdavkZeNsnl\nZWzTomjl0WwTRbWQhYxMjDG0AulkyszpYhQLjAY7HLUc9lsCuqwjqy4PbrfZXF4nbymUChaqZeEM\nB8ychMW6xXy1TLUs89mPPc+d3X3yos/x4YCAED+Tqdkmo3FAtahz2HZJwoAFq8RJu838SpHdnRPS\nSEIVHdY2F5AKFbYPH+COPDZW5rhwfo4HW/dZXd9gOgUzZ7DdPMSdhvz0f/QJpm4LzS5ysn+TNJ5R\nLi8xc30W5hcxTZNSrkCOAhlDzGIDcTzlyFdxRqcUtQHlRp3JdEwiRAzSFFuKyOtn8KaHqGZCnMyY\nVxYxXxCIx3sE7iq2rpEOEk76W5w79yS9SKDvtCireYR6xKj5iHv7Pa49sc3a0n
n227eIjQRjuAls\n/0ib/cDIYhrrXL54jvfu/D62olNbWSCYzvCFEmE8RhFC/EigtiCiij7jiUnZrnPiR3idNkk5Q9JU\nGqJOezxiz91FKmSoaPiRRThxMOMS05mLb8UkkzaZp7FiBuzvbZOGKVHg4LshshgTeROG4wGZoEMm\n0O30qNbK3Nu5TXPrgLMXnmHUG3HwYIdbBy0y95CnU5EkiHnlCz+D63TwxwN+7Bd+ia/843/EbJbg\nekPiFBQpxQ1iMiEj8COkxCFn63gzHy+esHblE6ydP088nYGYoOkyYTDm8PgY1xkjyApe6CGKGfXF\nZSSxQ2LN8eDebbypxLn1SwhKxEmrjW3LbJZUukHAZBZCpPMf/uUPs1hf5Ma738cWTIRU5N3mERcW\n5vjMRy/y2vcfcPlCnrdvztje2+PHX75MQU/xClXKRx2e+ezz+CJo9grnVx1mnss33n3I6mIVTYJp\n75jFRhVV0VE0BT8OmTohhbKF46QUCjqtZhexlGFYOQyzRD5fQBRFPM+DVOJ/+tv/Bz//pc/i3HwN\nRTOpL+aZtrsEXoWJs02Q2WSuiqIFFHLLNMwGB3GPmlbFJEBI+6RBh3rtKfrOPm53RrV6BSsrcjod\noNl56nED1VwgHvZQCx7PnnuZ7nQE/pTq3GX6g4ek5vuXUj4wssyRY3ByhBLlccZj8kvnmaW3wS+R\nJCqyrpJXZCpCgb2phKZCfxCwUMkztiKG/T6+5XFyPMXMS5h2kcT0yXyQPQ/BCKiai5hmyKnroeSq\nJGbI63duIxNzNGpTmNrEYUqURRwPIsxEJRJjMkNkf+8Wb799gaVGDlSF9975HnONOWTb4sziGpJp\nkK/mcNwep9sPKdSqzC2v0rx/l4995sd57et/jK6rpJJKFKeIeCSRT7lqE7ghvusyd3aFn/oPPkca\niPRbJ5webqHqNorapHl6iKHlSAUBS1TwgxhT04hkAbVYw5R9Lp87y85BEzOnMh65FKuLlCsxk0Bm\n89KTnFsuEM1ccmqKkVNpdUPSaIhmaKQTh0O1x/ANn9X1BW7f2CJXsOh1Z3zn+j4XL5vcfe89Xnjh\nEucvnmM4GFMs6DxyO2RZjKwYnJ5OGXYnCFaOl68tYpfnefMH30HWqxRzFvghvXGGT0jeLjLw2sxl\nOggCaRo9HgUQJSJBoJwW+Xt/47f4L//Zz2JnCtuPRtg5C3Gm0wxHLNcNcmWL6dBh73iHjuJytrqI\nr6vIqcrJ6Ig55Tz98RZPXv0cbpjRGd/mpfNf5GBwl5ws8HDWxI8dLHmenHSOWB6xGFh4UUDJTSmX\nznP0owd7gQ+wN2ztOQVDFtGkmI35C5xZucKj3SZ50WNpfYlZGJDP5ZBNG3cWs9JYZjaZUioUgBym\nJtEZjHEnAtWCgZ+ITPZdDFUnCVV838WLh+RKdY5be2RRRuyBjETghfiCS+fEA1GlNwnwvZiT0YRi\nRWfUT1C1Au2TFq3jR8SzCCeYIcoK+BpxHLC5UKFgCYio9Lpt0miGKSjUFlaRkhS9PE8kaui6yHDc\nZTqeMRh28EY+BS1l7cwFPvHRZ5j22iiZTH15mUK5gjNxmWQJbpQiqzbD0xGipiBKOqpVobaySprF\n5IpVVMPAmc0QBJPJdMbS4hxRJDLt9Ljz7tt0j7usb6zyjW//CZfOnuH7P7hNfW2ZUXuELwvUyzpC\nPMF1As5u1Nne6/KzP/dXUGUPt9ND0ATyYsTZjSVu3n+Aolu88eY7pKLBfrONbeoYhsZcY471jat8\n/Ru/S14vksQJURTieA77R0coks5c5XGgvnnhCURBxg+miJKOadqossTeyS6CInK3/xpSYNBqD6lr\nBllBpFGvk9MMJEtGza9x5ex5hEzGcRISyUFyMkLBx41l9GKeTqdJv3PIYHxMZ+CSy/LsON/DdwRW\nFs8hJinOVKTtH/PU8kX2wh3udvdRSyXak3s0v+39yN6wD6yC/xv/8G+QJDnicEokmeQMhTDsEerQ\n67s0T464cHmFo6M+jnNEZzRBVDRMUcWJXOTyGKU/R3t6gmUlCMwTRw6z2QxB0UmzkLyuM5hCTs3j\nOm1KtTXcYRtDLxCoHuI2CKJCpzMht2oyr5TxA4WZ61OpLtOoSVgyHLd6zNXyHHdDLp9fI/IdapUK\ntYVVzp69xPXXvkouX8JQylTnG7QHx8TTmE77FFHJM/PGOH5APO2xc/wIIbL43I+9jBgI5Ap5LE0j\nZ1rs7T0i1jQG7S6aaqKWqkShhzfqkqkmH/70x5lMutiVEr1Wi9/+3W9wcnDE/LLJwzvHzJfz6HbM\n9sGMTz57hWbrAbNpysWLm+y0XPJmxrNXLvL6m69x+0GfMIyRZAHHmaGZBa5dnkdV4a07J2Rpyqdf\nPIsfTSlWFynlc3TGQ/qDCY/2B+RUBUnLEGKRXM7gQx9+AUmweXCwhSTLaLJPtTjPN759nfOXLxKH\nDqtrmzx77UXm5urkciXsYgPfGVEs1Hj33e/z0vOf4m/975+iN7nH6UmJkmiT5WfkpTJHpx1kS0WT\nQEstQvGYi6tP8mD7EZuXLhPtT9niPmam4ykzDDsmGhcZdATWF4p4gYRaiSmUHKJpHk22KEo17u3v\nMBmeslJZ51Q9pWBV+L2/8+6PrOB/YG5Y2E2IXYdKvYRuG4iyShKKaGaRdSXk6dIagqFg1Av8cf8m\nRlZGV03Kps5y9RKdwUMCKeVCvcJEtklCAVfIUIjwvISlfJWd/RZyZOBVDxCqJvu3TrF0nwkZ8nzM\nxAN5KmJvGqQnBgvPV0klC0OSGY0dhv2AWBOx8zajccZivUycBYydKRcunsUqFnFmA9bXzzNsHRFO\nT9jr7CNbJs3tU5bXFgn9hCCKSHwHP5WYq53n4pVzZFGMJEnkNI3scSKVmSAi+BGxVqReLHIympDK\nKa1+wPMvP8fNG++Sy7lc/84W0XjEk1c/Sjlv8Nqb71Gu2CDL7HZmXLxwFqtexHTKwJBed4RNiJQm\nnHT2aXU9pr6PrmikYoxkmNRrFpoiUVuc56+/8DIP723hOi3y9QUGk5TpbMDS2hrffu2bhH5IkBdY\nzxVRrYRISuj0h6jRESftQ5aWKpjFRZaXFpmrPqDb6WEZEaN+D3fmoyhFRDmPiMC3vvabLC6doTfz\nmAUOlaROJiSM4wjRiGl2HAIlh6WVKCwKzJfXuHPjbVKxQhKXUCSB091tfALy1jqj7kMq8+cZjA5Y\nzYckgUEwVqmtl1ksjLjr73D2bIwwzuF3ElQxQWtYRILGnFEhHL5/U8sHF7PU1xBEkSjMiKcJUTCj\nvjBP5IXoZplQlpkOZzy7eIlaYYk3bn2X7eM79E9M0t4hsRexVFtkyoh4AsN2TH7ZJAoVNs+s0+4N\nOHNmk2Gnw3gKohmzdtGi2wkQRQlRiiEK0TZtknbC6eiYBw98NjZW2OmkNOYWWN+wmcwmVKwc/U4T\nZzZCSKBYqNBu9zA1kb3jiEo+T3lum
dgLMRQJVc5TN+cIo5DtQRNVlpnL5Rjg4BHhz7r4mDTKBTrD\nEUI4Y5BqiJIMYkLRVtlrn3Du0pPcvvUeTz95gaefe4LZZJVWaxvZnjDyE+79yXf48Ksf587dA3r9\nPkuLOa6u1kjoc7zT5/jYwTJhba3IbrNLlqR0OjMcx0XXdURRxtBs4miCoYqotsWd67eIfQc5HdPx\nIt74znVeuNDgyY98mj/62h+z1igzt1JltV5j6k05PWmhywLbD+5xOHQIKzKeF1PfnzI8PSbN+XR2\nT7n29JO0Rx38wCVMImxJJXAHHB+PidNjzFKRreYWpfxVfvazf5v/7B/+IkJB5enGEuNTibHTwjks\nkzbKRJbANfscB+0tZpMUac5AUHUGzT0S28b1RywV1zgdNdFlyJkTDpttOoUxolJguwVK2kJb3EW0\nAoxApzU8QupHKOb7S058YL1hURwymzj4QUyQxhTqJdqnfYadFtFkSrm4QG11iePuPmdKC3z6yU/w\nkadeYeHiKnbFwyhKhJKOm6rk8zmWzlSJIh8zZyElOdwwoTccEAkp8SSHpIZMOhFxKBA5AWIgoedV\nkknEwJ9iWTJeLD4OflOVfF6l1zploVrBNGzOXbnIykoZzcpx99ZNsmBKvzci8nwUPY8oCliGiWZU\nmXgOuqGjqjIrCw2QJAbujCxLSLSUOJaJMoGpFxKFAoXSEkEaM5lNCRPw/RgrlyeNAs4uLyPKJsc7\nd0iCHqWcTq1ao9+Jac+63L17m0q5RBTNaLdanByPmS9Z3H7UwxdiVpcK9LpdLpw7x73tHrPAJUWi\nmDNYnjN59bkVfu4Lz/PZjz/B1751nb2uz9bWKUphjnfvHFAr5Fk9c5av/eHX8UKRk14PNZoRukN0\nJeF04nFr64QbOy1awwlGGpCIIj0l4L5/wKEQcPXyBpatEqcptlUijlKi2GV/b4+Pferz1BsrVMo1\nyoUyn3/mxxAS+Cd/93Vm3YheJyJyhkiWzcJmlVXjLP6pwI392+CDbC5RV+o8MfcZWt2YYBAgRiqt\nWYeakcPLZgSWQioKBEKGoiWEZCj5IZ6/iV1aoaCUKXk2YmgRTY33tdkPbot+HGPaBnEiIlkWQZBh\nL5QZND26/QGu77N59Sxx7QLH/SMspcjTjXU0U+R05DM2Io7723hhjCSqCKFLzjQ4HPSJ3SMif4BE\nibAbUljQCT0PL4mwJA0/TiETCKUYeZSiKRqRCx/7+ItMuj5rywnLy/M0KhVCZ0aWeciGRqW2QOjc\n4sqT54mlmDBJOH/xHKZoMRj3H9d8sphioYzvzNCNApmoUK+mKKbGwJnROzxgZ7JPo15HkSVERSd0\np2Sqgq4JyBigQc4qMw0DTCFlEkzZeXRMvdEgb5gIqkl/PODcxhmuv3MfQcqoVKuPkwhTlxu3Bqwv\nzTGZTnn3boe1jTJS5vLSs+cp6ialYoV3bj5i5qW8/XDE2nyMLLSoVAsUcwXWNpdYXTuLXXhAuWIi\nKQZ2scCVpTlu3X7E8upZer02128f0B/1mTgxiZhg6hqyVibsDggzi0TMMGozetMZu80WG2vnyVIf\nTc+hKSLnL71Au71HpbyIblpIYoxSfKzpaCkiH73449zv/YBYz5CEkEnYZDZ+yKViBWnBolS6yNA5\nQRCKvLH7h/zEZ79I2OtzNGrhJR4zO6BSzdFQK8SKSUVZYRjfI0192o7HGcuk0+oR6xLdcMyZ8zad\npPe+NvuB/VkKpRqGrGNZOrqiomoKhmwyf+4sQZIgySInD3ZYqBZQtcfbToIs5Xz+LHJSIp8ILMiX\nKYl1htMJnq/hxQFFsYLvDDD8AoIRoi3qBMGM8NRCcAR6nQCjoBO6GYQxmS2TZjFaOaCiwGzYYnl9\nhdPDU46Pjtk/PmXsOMiqzJ3vfh87t8RRs0eW5tjfaXO0s8eXv/w7tI/7yKZKkiSkmYht2oi6QppB\npVimnLNZKVdYrNZIk8fTmU4YIisa/cgndHySOCMWRAYTh8ls9KcZvZhzFzbw4wwQkTWdh+99nyAO\nOGx1WVxusLmxiGXYuK5Lfa6InyjsNgfMvBTElNbxAM/rs3+4z25zyO2bjxgOR/hjl435iIsXl4js\nOgvVHC9dqqPIIU5nRFGJeersGvV8HtvQKOkShmFw0mwhyglzlRKeLyBKGcvL85QWdJgmtE89TE3H\nKKlERzKIIstLZ4njmHa3Tbu1w3HzPfqdfQ527uMHXcbDY+LIZzIZMZn2GY0H/MJnfpW/9vEv8NGL\nv4xuyuDrvPXuayy9+DTnG2fpDPdQJAPV1vnch1/lu29/jQftO2i5FVZXFdbXbMphA9GyyBVEJpMu\nFWMNUxeIXY07k9dRU4iHE559pcxBe0Tv4fsnvD4wsuRyBaq1eaQUippJvVjEMm00BDafuMrYj0l0\nk9HRY8npXK6IblTxpwGvPvlZzGyF2HepKiWeOfsUsjJFcm0iccKglWJbeUzXZhbOsAsZuh6RCiCZ\nKXECYpZRz9VwOgmFlSpqkCMVE1762FM8ur9Lq7VNlgjMN3KguYyGDmm5RPe0yZPXnmZpc51MlREl\ngY0LBSTbRpRFbFUnI2Q0G6HLCqVqA8+L0BWL2XhKGCcYskz7aBfikESUCdyInigwmkWEaUK90cCZ\nzVBkid5kxO72Fh966RqmnnDjrW8TxgavvPJReicDBv0BWw8O6A8GmKZO67DLYBwx8z3GIwddsyga\nBcb9mGFHZP/gmDAR2Fhf5xd+5tNElHjj7dvIoy5xEPLm/T1ev7FLOa9SX8hx1B3z7t0HKFqe3NwS\nVUPGKMQcNkcomo3nu2TA8UkLJVM47R5jqhYTPyEeZsSewOnhgF5nwJnzGzz7xCc4fnSHu7eOSCWZ\nUm0BQzOwDJ04CJh5LqqSQxJENCPjf/kHf5+bW7/BzzzxCi9e+UXickbzweu80f4Kx/4bnAQ3wTvk\n1vXv8eFrlymvSUTqPdr7Kc2DIaLlEngjHG9MgIDrS/hiiDmr8eql59kdHyFKIt/5ShPbk1hde38x\now+MLO3jQwLfwbTzuG5AFicUFJmcZJFFKaVqAWcwYjSdEU4GCIJI4nmY+RI793a4euECS41LWGqZ\ndv8BhlohIyVLc2i5kJPeIY/u7xMdJESZxdL6BZJYRJJM4iRATDOmsxFpEiIlGXPzFZRkxI039mh3\nB9QaNUp1i2JBIa9YZFmElBRYPnMNz5nyxvdu8uwzFxl7Htu7GQePDun/qdjrbDpDiFNmkc9o1EWR\nVaIkplSf48KZ86wurRM4Izxnhh94iPki+ZyKICVEUcBsMiYnK3RP2zjTMbol0D5+QDCd0jpu0e6N\nuPG9uzzxzFVUzcDxXAI/RNcFKnbCFz/1ImeWlpibzz1eTihE6PUy65fmeeraeT79yU2evlLhD77z\nLqsXNsgkEamSZ+oFTMdQ0HNs7x/w8Vc+SRym9HtTrp1d59H2Dk+++BRBWqBeX+UH128+FnZNMz
Yu\nztNuDUEQ8Gce/aMWB9tdhmMHRTbw4wH3T9/iq9/8CueeeZHnX3wOSzdZWFxGFDX6nSbbD18np2oc\n7N/Ddz28wQBtOc+D8Yj/+su/S/vha/wXP/n3GWYDPvvcL1CILnJp+UMMprsEdkZvNGDkuGhqQn0x\nh53puJLLbnOHLDvBMDzGkx0urz0FjQP23vOolyXEVGX5koUrmTiD6fva7AdWlPypL/4Ynj8jSVNk\nWSTwPRRZR8oEhDjFsCu4YQCyj+86VPMyPjKN2hxBNERXdDItYmf3PlEGk7GPnChIYUaMj1kykbMM\nyzaxcyLTwZjGgornZY+3pqcJYt1GS3M43REfeeoi455LccGm34t47ulrDPpD3rl9G12R0RWRB1tH\n1JfLhJHIfLXE9sEBVy4+RxCOOLdR52jvgFKhxv7olOX6As5wRCJq6AUbU7OoW3mUfJ7Npy/y/Mde\n4fzVp2ifHBF5M4LEJYsUssTHUCxSUqa+S3vYYdYbcPnqZbxZwHGvwzNPP0M/mXL10hle++4bKOrj\nhESnM0BRJW4/OGI88QnjmGbbwTRUIldktWHhZik3buzjewEFS6CkZ5TKBt1TH1O3KZQNJEHC1n2G\nwympLzK3Ok+5Uefk0TZBBnP1BnN1m0KjytbuIXoOnEFI7CdYJZPZ1EdQMuY2ighiRqgk5BSLcTvi\n53/25xiftFhYWUZOPLLIwwtjEmRyxSKD8YTG3Dpx4rC0fJlKcY7bW4/oHDc5PBkxHj/ir736n/M/\n/sb/StEe41WbXLCf5Z2b1zmYdlGdjDgpMQl7FJV10jBkOO1Rm2vQO9VJQ4lJMiFKVExZoZzPkyk5\nqJURlYwkTdh/o/8Xa2HFbDJDEUKUvIgfxeiKiB+6IGjERKRuSqVYebzMWnLpncxY3Gywf3xAKT/P\neDxGSS3MqgZTGAx62HkXISxREKvkizLtrogve4xGEcXCAq2TAbUFA0nJ6LZiDEMlEGLkvEAYegxc\niQtxQrd3wne+9w62LVJWFijVllBUiedfqNLrdCjZeSa+S96yefud13j2iau8d+8ms6nORuSiajqS\nLGIUy8RhStEsIkkp29vfp7K0QKk4T6t7gq5Z5PI624cHaEbGaOCiW0XEsIMbp3TGPe69e5NXv/CT\neLOAQmOBn/rC57jx/Rs8c/Eiv/PbX6U7GCNKEsMuZEJCXtFJsoiZ5xOMHqsql02Ve/e3mI1z+Eio\nekbJiOmHOt96/Q6Vcp6SbbN93GV5scBC0eDoxEeSDlncPMdo1CH06txtD3g2b/H6dpPj7v/J3HvG\nbJbe532/0895ztP728v02ZmdmS3kcpe7yyaKNElZVERFsSUIlmVEsWEpsYPEDhwjkQPbcRocJHKg\nAsG2nEgWJaqQFHvZJXeXM1tmp73T3/4+vZ7e82FXRsJoERr8sLq/nIIb59N14b5x/vf/d3XpTjzy\nZQnPEYgjl9TQmc1cCmUFTS7iRT6KrlGsb/BTP/lzLOsqBwc7mMTsPbiCIqgMj25w+tKPoxoyarHJ\nymqR0HdJAp8/+Lf/O5sna3zmI7/InY17PHPxORJRpN6fjwAAIABJREFU4F++8Kv8V7/831GQa3zx\n2m9x3f4GKxsSx8/8DNe+9XVWF1fodgT6+j2K2RkuHlvilnuZar2MWltEEt6GkPs+k6SEoStIfZtM\nsHBD5x01+wNV8AVB2AHmvMXciLIse48gCFXgd4E13uYdZ1k2fXv+3wd+/u35v5Rl2Ve+73vZb/7z\nX0HMBDIBNEElEhNUItZPPMLcDYj9EAGQcjK+ZxHZU+qlAoJZxXEtdvfvoddLHHUOmYc+M3+CLAuM\n3UPswZheBKakYjkuWaISRR7lhTpHt7ZZWlkhGkVYko/ckFlWahTGKmvHiixvPIskZww6Y4ycTpb6\nRGFAZzikuz2i3CpQa1UoanlcZ0I9XyZfKOHGGYIUc7DXo95co1lqsFYpcTgc0Gy36fW6vHT1W6y3\nK7z3uecwGw1kDA4PdghSjXJZ5Dvfep3RUZdPfvpDCKlOpdZC0BS8+T6el3K0f0ixWkdKY373X/02\ntztj9jojiDMkUSRn6jTLGv2xi6DDs489wqvX7+I7PmGSoSNjlmIUKYeiQqVgcDiKKJsySZqQBAKK\nqb1FwY9goWXgzn1WVxrEYUZzZQEvzLi3e5ebD3voqoAoiRhFE3s+o1Vr4EYemRwhpjGRILO+cZZz\nG2d47uxZWssbDDr7zKYH5PMVssjmja/8MetPf4zN1iJme4WFxZO4tsXR/psYZp3h8JDF5WM0Wm2S\nRETIfHJqgdeuX6G9uMSd3nUu3/gqNx9eJ9fUWWyv4cc2vlclFWwsex8/FtH1kDAKUM0pilBmuR5y\n506V5YUKbmZiphGCGFBXF/knv/gHP1QFPwM+8Dbj+M/Gn0Xl/TNBEP7Lt5//3vdF5S0BXxME4WSW\nZf/PMCSEVERWZaIoJFZVMscikhX29m/SWjhOHImQJiRhiCypHA165HMGJcnGlgJEBSQn4OXb38FP\nUtrNEg/vd2m0GsgLCyi9AeOhT0JMGos0axUGewPEpszMdkkEnyySyayEycjlmR+9RL5SYvveTYxc\nkUqtjSyJjIceqgx5vYBamdLf2yavy7y5d4f1MyfIlQoMR0NUSWTc81FVjd7REa7jc+/WFk88donJ\ncMS1m1cxiyUcHyYji/5wwuaxVZzpgGI1j+6rPPfYCV5Ox/TmEboU0ntzi8D16I36bJ66QBjavPKN\n6/zx179N4KtcOLfCzIrpHXUQRIEgdCkoNQQho6yqHNx/wMVjK9x4cMgTZ5cx2mVu3djh6HDKYslg\n5gqUchKqoqJrOWIvwskiREkir0KtUmRuz/E8n+t3OvzMxYt0hkfsdGYYOYksESm2ymSBh2hKHBwO\nUXICuqGTBDI5Q2Z18QwffvISgmSQpCGikkORcgh6jslgTNx6hMff96MgyGSBj+vbZCJIegklV6NS\njYiCgPl0hGoW8K0h9w5fp1xoY5p16vl13vPYJzh97nle6v4vGNEqhjynM7+LKJqQKyDbM0Yjj2Cg\nsrS6RGbUuP5gl1Z+EcHWSQ2X4dijUs8z1Lbf0QT/Ptuw73faDxWVJ2QyuUKZ7v4+DbPAVNTRFJc0\nSJnNhqhKmSiNSKMEydDIN5YZ9w7R9E00PY+pyBx1t1ldWuFo2qUzmhCLAkdHA+p1kzgJyJkGpmJg\nBQKDzhDZCGnnz9Gd3EXWCqRuynTocPzsGqvtRbZ2HxJ5PqPhnFp7E12FTmcbIc1jRR3ycoPyZp6Z\n5/LsJz6Df3SLoqYjFassL51APC0w8QK++Me/B2qRydxh7rlkkY8T+1y7fJtWPceot82Js+e5efUa\neqHCc6tn0CpVBttdKgsnSGOLgqJhtE7R7z8k7wXoqkLPdlk6fpLu732RZk3nta1tYs8nFTNUSSGJ\nIw76XUg1pnaKtFhmcOs2QSxx82DMo0WV2WBOTpcx0
KiUTfw0RZIUJDWBJMUbzVEQeezCCfRSnSef\nepo7N2/zeM7E86Z85TuXybIQVStQbBjMZxZCpCBFKoLkoegGYjHF1A0K5QI/8cGPk4RjNEXBzJco\nFutIGyfY2bvHY898krMX56i6zr3X3sCoSWwWL1FuL/LwwQ2qtTx3Ht5GM3Isrx5jsbGGoSlMxwc4\nkUM5TFhd2qBoNVDQePHGb3MzuIwYxpCMMMoNjqk5Jrk8KgpTx+fefZv6WkQlqxDU75CEBcKZQaja\nDJ0eJafxjgb4Qf+GZby1QrwqCMLfePvdDxWVp2gFNFWj0qzjhTK1egkvyOM7Mqnv49pTRDElCGLS\nJERIUw4HAUkE+byC7QVEsUMY2mSCTCpEICkUmxXGaYwbq0CKki8hVCTMpkKQ5ZgIE9KcR3VDQarH\nNDfKmKKAk8ywwphnn32Sc6dbnF0pc+X111nYXCOnJ0ixznTSwfZSfD/k9Ze/wngwQMrnqS+2yOd1\nJtM+K80GP/8f/TVqUkick+kcDvC8kG98+wrjkcP1O33udAPOP/okH/xLP80Hf+RjTPtHpIHL4mKL\nJx5/hmZ9mVmoMj3cpVIsU24usXVvm1evvAHJBE3SIIPZyEIxDArFIqKUoKjyW4RLWSYJRRKthl4o\n0ViqEFkemevwgfdfZGGxjqjI2FmM67msNAukjoyi65SqNRxfIIsDNEnm5o1r3H3wkI3N43z78hvM\nxw56uULzWAVZlyjUc3jZHCl0uHhxEaNgUNcXqCh5/pv/+L+nUtRYXlqhXWsiIeBaA1RNgQw0VWQ2\n6TPuDjh+8RyXLn2M1uIxyvkaM9vla1/5Gpdfu8f9mze5dvMao2GHJJMpFRsYacx4ekjk+jTyJaJ4\nzNn1HwfPwwq7fOz5X+L8yac5OhxwZM3IpBbnz5/n2FN5FlcWaJ84gWeFCLGAgEx/5FLMTuKL3vdL\n9d+NH3RleSbLso4gCA3gq4Ig3P5/OSnLMkEQ/r2i8j77xS9QzBs4gUM9V+CZ595Ho1LCdiBXrONa\nfex5gGbksIYWuVKOYqvEYLBLsX4BQ9RBVJg5Nq6fkKYx5XqV8aSDFMqIkYAb24R+yMwNyOsC1aaB\noukIWZPugzFCLBGWp9zzQyqvi0yEHv/Hr7+Gnitz7JGP8qEP/CiXr76AF2XkSiVypoyoKzy4Nef8\n44+xuFohDV3K5VWCMOTsmfNkgoynaDz25PMcfutL5PMKnj/HrNRQpICT56ucW38S1x5jFotcf+kK\nJ594BFlqkAged29tsVivYrbajDOP3bv3aK0s8cjZ81QrNaaTKX/zr/0V7NGEoTNCNUNu3zji2pZH\nEHsoko5uKGi6hlEucHSvi2rJZFnIi9+x8L2HNFZKPPrIMkKc0NxYoL20zK2dK0iew+Zamx959lH2\nH3Z4sH2H/sRjbW2Vl25d5T3nz2LrPqoeM+gdQc2gZixx7rl1JCdi696bmNoSdtrjb3/mH5BPE0q1\nBrE3RlVzbyUVTy0C+4jNlVWi0GZh8xzu8CYyZURJRZIzsgw+9em/Su/okCj+GL/xa/+c0/UaSexi\nzTWUYhPBCxGThO0HdznzyBM87OyzdeNz6JoFus8rO7/LqcUPkRgOp0vv4c7e17g9qlHQarhuxsj6\nDvOOidjcZ3dLZ9IN6GQv0Vx8Z8jeD2SWLMs6b18HgiB8jre2VT9UVN5/+OOfpFbO43kxu8Mx/syi\nsFREE1WyOKLePEF/sst4f4/C0iqhZ6PLeUaTESvemDgNCGIfOYJ8IqHUN7n/4DZhJCEFFqQVRFMk\nwWOhUMCOXVKgs9NF1mNaZ9rM9kfIsoRYTTmIHTY2z/H4+eOkCVy9+gp5Q8ceBvQGXabWjIIos3bi\nNGvrTXrdHXJKxvkPXcRUy2SZh2vPUEoNothHIMaslZBzBfY7ByxUK9gEqPoaZ8+dwEHl1je/TH1h\nmWtXrpCrH1FRSxQrYJZLTPsdark2wWqGnqui5QsstmFp8RTj/k2OogmNjZPsDx9w+tF12s0ir291\n8e05XgQ5HRQ5xndD1KKMbtZw0zErj20y2r/J9y7b6KbOyTUPJZfjk88/ym/9zhcZzqbc3OmxuFwn\nDAwqRQk/8uiNxnwtCzEbebzRHMnI0V5s07vzEM8xOffkGaqugZD6/Bef+cfIYkaczrBHPmkWk6Uu\ncyug2lhH12VEQSCIVObDHZY2nyCTVOLMw7Nd0pyALIm0W8uoqsLf+/v/iMlkTre3Rd5cJksm7O3c\nRTfq1BZX8a0Ow+2rrOee5mHcJ/bfxOm57Ha/RBQ1uT3aQsrVKOt5JHmEGGxSrh4jVjQUucqzP3qc\nyAnZHd3BUE5w9Y8Pvl+uwA+wDRMEIScIQuHtexP4KHCdtyLxfu7tad8flffTgiCogiBs8A5ReUkQ\nkkkSGREKMhM7QdZkMjnB80FSI4RYprZ2nEl/h9BN0HMysgp7u0eARBQE6MUCuXqOKPXQSjq1cglB\nK2HqKUooIgt5KqGMLmpYgxChKJI6EVZniqQJeFFI756LpAjsbO/wf335d3n59lXevPMKS2tlNF3C\ntRySOCbXrHOw/4DtvTGnz1zi5Omz3H3zCm7ooogSiq6haQb55gKjucVKoUmShex0DuhPxmwuLvD8\nxU1Gu29SVUSyLCGe7VISYxZKBt3uHYazEW9+77vYUxetWuLg8Ag/CiiVFzDyZQxDRpFKVNptFAkW\n6suUGg02Nts8/76zPHbpNMfO1mhuFpHyGraXIsgiYTpn4ewpBgeHRGGZue/TH9p4acC93R3+53/x\nh2SKhlkss9Iqs7dvUa8WKVdavNHbBVUhh487mVNrVVhZruMfDWmt1jj3+KP0DjvUa6scr5+j191D\njS2i0EdURYqFEknicNS/TxCMsa2ALFOJkpBiocxs3qF/sM3R/n1s16N78ADbHgMxoiAiihKyqKAq\nBWQlI/YDFpY2mDtzbm19j6vXrnBl+kU+/uxfwfNu4w9SvB7MnIj9gz0yb0S1tMHGwgVq+kVEs4Bh\n5DmxUUM3BR6OrjNNtjhzJsff+dFfeEcv/CArSwv4nCAIfzb/32RZ9hVBEF7lh4jKC32HKKqQhDGF\nvM5wGDA87FFq1AnmPuOhRb5UxfcsSvUWzmiEHCuQSIx7A/R2iak/ZawI1ESJ/cEYTTZJdQEtLxPZ\nIaGiE7kZnXKE4EM7l2OScxCp4LtzivISWTZnbk3ZG3TJejquGbK1d5swjfjNf/vrfPiJ9+Onx3j9\nyi18a8Z4ZNNaaXL39i2kzGM2PKR97FFyRg01ZyLJOkngo5s6+aSMZR/RsUY896Gnyew5t+9cxpFM\novg1ps6ENIbtmztMv/gin/jUj7Cztcf5i09hDQbs7u/TWllFVSVm3VtIeo1edxetpLNUf4z7e9uc\nO3saNReSkw2+9Cd/RCFfJLk/QKrUCMWEk0+sMT+wqTYXkWsxagcsP0KRdFIh4taWDbGLrEhomkIc\nSRyOQ546
f4x9r4eja6w063T9OUEaUK2WGfkD3H7E+y4dpzNL2O3fwnJE8mlG3cjTLhS4u3WdxaVF\nysUKqZBh23PqpTJxomGWSljWEG8yQM/nUXMN8nqKUSrTaLa5d+tVYn9Cub0Ob5+zGw8nvPrK16jX\niiSxzWAwRFFN7h7cwZJm6MIlvvDNf80sOyQIiuRzb50nbLTz1GsC+fwRR8MjlnJnGFj3ycnHif0x\nJW2Bil5FlQ+4/71d+mvvHDnx/2uWLMu2gYt/zvsfKiovSRPiKMV2bFwBmssrjLsDcqUM5JQwFslr\nKZOBhyBpJEKOmT1FkiS8yGGlcYZMyki9GTuZSF40sDKP+U7A8UvrTCZD8FKsXIIgSQQ4iKJEQaoS\nSj5SwyBnyMwfeNRONAiiGWIpZrHVJEx97r8+5lAP+cKrL/Dkxik+/PGn2X64zXKpBamPM5vy0vde\nYbndxAtCBFEhiSLspIui5hBVBUV6C5r99MWnQDhgEIV8+NSHuLf/GuPxlEwBxSihHq/ysZUTxJ5F\nSdLRoj7XH9zm/R/8GDdvXaXvzilVS9SX8kiiTBwJZJrD0mKVF1/4PE+95yPcG90gTsbUqlWspTXc\neUZaddhsLXHXvYdUHpLXSsRqAb0h4rkOSmLQXjKxrYxCtYAUa6hljcj1ePXhfTw1YdzdppSW0Rd0\noknEzB2jxwpyyeDGvS6ylJHKEYov8aFLP8mjCxvkSjlWtUtE/pj7918nFXRUrUxOE3F9C/telzCI\nmPZ3UQt1rOmrbF54kgsb5xAFBVU1SDKbwPeAMbpWQMDixz71s/zqb/wDmgunOBgfIGg5EjVhmAyp\nlh2caZFc7jEeeeQJjNUDiPbZ35lSWr5PXtVoNmoYyYjR7RFzzcEXZ6y12vRHCc9c+jGq1ibOg9vv\nqNl3j0gZRPiuQ5JlkCgYukSYFwl8myAKKIki83FG6EfkS3kEOUHwElJVJs4kHGuIFbkImkgiCjhu\nhG5oFB6pctTtQeqhZSqCoiDGPpmlIDd1uv0ux9dPMpHvMJ4MkEMTYTokkOsE0zn9u9sc/+BJimfH\nlNQ8g52Qr3zjNp/88RbPPPUMN27eIEolDgZDaqUie3uHtFstNNMk8UJ0VSbKJGoLDVzHQRFUYISb\nxpxceYx9q8/64hJz28fa3Wd0FLD2WJsXL3+X5fYiadbm8IVXWd08Rb+3j+/MmXfv4oTHqa7IKGqe\ne93r9LZ2aGmbPOzepH3Q5uuvfg5TN4ms+6xoTeKcQqKFxKnN8hMtRDvg6JUOrfUaF44f5/aDOwgO\nrJzfoORLPLT3KOgKW7fuc5BFFEsmWgmUqIKWJdSqKq5apDPsIRkGWjKjvFBEzGQGAxdBkHnhxW9j\nvl9Bvz+ludxE0vLs7Yw5deo4h4NdTNmgodaIQ5tBf8Ty4gpLm08iiQ63rvwpw8VjFIs1ZE1HlZu4\n04c4ahlVnSLLMtuH93n2gz/LC1/7HQ7SMUUx49rWmxQ3ZTLJZlqSKXkKRSXP3hsy4/KcwdYBK5PT\nPDS3Ob7WZSzvU9VXGYVz3J6JubDJpy9+gqVGg6wsM+j331Gz75pZoiQgDCziMMYNA0QpQRANrKmD\nLEuQT5mPbSTdRFBTTNWgP3cwNAndrOAFEZpSplhaYGZtoQtglMpMp12URhEe+FRLRaZ7A9bPrhMt\nuUz6PkWjyiQYkYY1vGBAYbWF3S2gFiWKzSK1RxYIlYwTC2e4fe0mxZNFRs6cz3/2TY6f7HHu8Uu8\n8p3XSMQCK2ef5omLj9JcWiH1XCRZQ9ZNpEzFmri4dhcvDAiVPsFIZCDdxp+H5CrHkBhR09aQlw9o\nOybaxY/gpxbrBZk4XiRfLuBYh5RqZXzpNEkQsXPnCp39A8xNmV3V5s3BV9DKItZ8zJE146nWadLY\n50C1yIZTgnxKuSgSizl2r+xhxRlrawq/9+Vv84lPPsdk54i9oz3iLCVWZIZZzEHi0a4tsXx+md79\nQ0RlxLyvoJR8wjRjoV5CiGMG84SipTGcjUjVDK2kEscyH//wR/n93/9tKlKdrdde5N7uPked+5w6\n8zheHKJpCfNhiOt5CGqN/sFdXn7xy4SRD8bnWWivUq0uMxzvEsQJ9UaJXK6ILIucPn2RyXzKpV/6\nx+wN55hqyG+9/L9y4/CzeJFM013BClO8+TZ5VaF3bURBamHZfZ6qvY/Ld15nc13j5NlNvvD5WxTa\nBlceHHHBnHDj7pAfee5ZyvW/gM1fUeyRJTVSISEIXFIhJZz7NJp1nNhCHAYUCxUEEtx5SGiFmMUC\njjPAsVxqK2XIyYiORVXLISznScK3WMKCauM0CszSCYWnQkQtxup62M4MqSghhinzeUJ7pU7qadTz\nBjfv3CP/+GmixCTRbPphn9FOyL/4b38NfsZC8kOCOEWSND763scpFMvEgUsqSkSTGYIqQSIQhz6q\noZFmPm/ub6ErMqPZjHJtiUm0TzMQUZaOYfUgUI/ISRqYb/XI73UGnF14H2KuQKtYI2w1SSKf88UG\nWSoxcWw6mcW3Xv0KJDmSkkx2FPLl3a+CJjEbh4yqA9bma7xeehkzzGPlm2jRlPrzx1B7PboTkfOP\nraCbUyIzh58ITPwJWhTQ308onVwknkNn2COpzqgVF6kVxqi5Bv60S8cN+OijH+XW/p+S+D7H10xe\nfWOOUjyipit89Uv/J6GUoMoRlUYbreuCkafbmXDsZJN71y4zsWIarRYQUTvxGM4Xfp+5M+CbL4Vc\nOOYiqdusHztBqVwmi0Mce0g+XyFJE0pmkSSNaVUk0rjEpy/8NPd7r2G5IyaxDW7ANekFFJYIZBOp\n5uLPROx5TDYxGRY75FZE6ssKUtjmyZMf5D1PfoDBnVt0D/Z5uH3tHTX77pklTHBCiwiRxI8RsgjJ\nVBEMifDIZ0qKmjPR8xKzwyn5YpU4mmEoJaSGxqBzROzDMB/S3e+T1zOEioQQSkSOzNywiQKPcmCS\nFQUiTWDz0iaT7pggCxCmLq6lUat7zLKQymoR1w8w6yrTgxElbYGXP/857NkeTqaCoZBHIQojMiHB\nGo2IySiZNQRVZa97wNrycRzHIcHHjSALE2QjYZaFDPZvk9ZzrNaWef3aNyim6xyKPfpjid3eFNUw\nOX/mJO58RCr0ILJJ4hAhsJjPuuhmBdPQeP+5J4mlMd+6+l3ycZHtvQExIgubJncnd9BnCd8TDxAM\nBV0pMux3aOo55oN9ZFUmk8eo1SW+/dp9agtV5LyC6Rjk0yqiOufYiWPcP7qOJEg4vkWtfJ6BlxL6\nfSIDjlfavNr5GitqhevWgHrhSZbO3iUvNRgOXV4dfo/pgcaxWotX3niNYqVBfzplHPfpTYZkcUC7\nVuVbX3uZk2cfhRf/CDeFQFfJJxmvXvkup46f5p4/IW+U0HImm2fOQZIiSKDrBUgcAi9BzRUoFFf5\n68/9T/ydf/Rpjl3wsCcVNDNH0RT4yx/+Bcx8C2e+w7du/wHjESi5R
7hh3yVvNDi7+AQfePRxsjDC\nTWWmg312e9o7avZdM4uYCWRpjGfHOKFHXtGISXACF4EUP46IQhfJF8gyCc3MEXseWQrlQpHu3KFs\nlBh4HqZpIoQWgS8R2BFO30cjJVfTcHcihDRkOpwTjAJ0SUes1JGjfUaDAXq+TaO1hhwf0B/N8JY0\n1Ejhn/7yP+Rg/zpSmmDUl0kcl1SAQmWBSecIHyiVanhBROYmRKFInImUC0XG1pRIznBdA0faQtcz\nthOL4x0VS5Ww7gRMitdgqcLGegujNyZLIx5c38LQSuQbBlQl4jShmm+QJRF+NEeSinR725TkGpVi\njd5khlSu8tjj57hz9wpx6uL7IpKekUkZdm5KMV/hKH2r+7AqNzlZr3F1eoixohCnFhWxiV6Imff6\nhLbH9777Eu2VChtnT3PU6zDZ3cadexQMHdeeI2kehYUz9Jw9Ti+9B88fUdDhcLePHMjsbGU8dmYT\nK5FZXFzj6PAhkiTQmUdI0ZTHn3mSu9sPGU9T3vijr4OasNYq4HgJm63j1DYqHE1mGM6MSHhIvbiB\nlqshSSGyLGMaBmE4othYw41jBKPEUiHH//YrX+fX/uTvsna+xK07NvFKQD4T8MZD3NmMa4M75BcV\nqq0nOZzM8Sch5953niAZEDsidjAmE2Munqq/o2bfNbMkGcRhwNSyif2IyNBI5gKpkhGmHkkUE8Yx\ngh8TZDGyEqOnBsPJDNXME0YJvemIRIupFBc4HO6jBR6BnyA4IgtPLzPr7THPZYj+mEJS5OSFRaax\nQ9aLKD+yie9E9IZ7yI6IdaDw3vdvIsdV/vNf+ZvceP06nfEcIZYxKzaPnL+EkDioikKp2aSYJPSH\n+8hphpqLMMyUhzuvc+78B1DDjPHwAbO4T88ZMLMTNEWnWl/gzu4NPD1BLahYTg+/P6C5WEHJQp49\n8xG2Hjxgikxv6y5pGDCYHbCwtk5NaDLD4pknPkBrqczU6vPV0bdZWBfY2XsNRUlZO7FK92CEWlIo\nKxHjUMabWei+Rmn9Eu7gIXdGc4K5Q9yO8I0cJS9ArgiU6ovIhQlLWkSnJ/Lg8n1U3UVQl0GO8DOX\n5dVFnEgkdafElolZryHJDqJXY6YbdIdTcBPsJZ/XH2wxPDjgYDzFSCRmqctCqY5t9RjObLYe7iKq\nEqIbY1VKHI6nrG1GHA3mnGisUSjVmIZjYlEkERWmoxFxEiDFAb6Y0gh1iuUEzQ+4O9lGjGV+5pm/\ny7V7LzJpPSSZ7nB163VyepNyroa9ZRDWIWr0Ob/2l7jq/Gv2776B0c5TWWiQSSoLS5u8cf3Nd9Ts\nu0h3sQlin8DziLOEILJBTPGCGN9zmVsuWRoShwJxkDLt26RxwmxmE8cx7eYmai3HNJsiqAnVSp7Q\n0Sg2Tc5+8DwHg7skhQQtERnvOyRCyM0bb2J3XQ6GfWYTC7MmIcQikRwitAS2Hoz5Gz/289y9cYtr\nb24xndg4jsf+7g4vfvPbmMUWyduHp1NJpFxsYdYaiLkag/EuxVyTr37ld7Dmu9x48BLtSh1BMwgn\nNrEd8YZ3G78iY+s+SRYyn/mMJI83pkfcPDjiq1dfYJC6WNaUrWCHI8/C2DxF/2EHS52xsbjKoDdi\n++5l5pOIoraBjE4wDsgSEeegy2algTov4U4UarJBvVkhLPbobT1gPg1Ze2QBpSUhaTGGDa40I50X\nUIWIjcYSo8BBk1KkSo6LJ9/L2UUTa2qjmiXqukEzENk7GPDXP/rz/MRjn8KfpNw77DOdTlGLBZS6\nwHdvvcr+3ha1Ew3m8xGuMqV0PMee3ePqwV26wy5BLOG4c+w45cHuIc89/wwHRw5GIY8T+hwOXkAL\nbLqjIZZnk9PzmIpOpClv/aZPIsLAY6v7BjeuvEHfTegND3j/Ex/lb330F5jZLbYme+z191g7ucHT\nH3w/J5YuceUP7/DiN/8Ntidwb36IN+hx7daXiJOMh70OJ0+cfEfNvmudks8/+SiJIDPsD0CVscdz\noixFUXNE9pg0k1CUDEWSmc1nGJqGCEwdC1WQ0aoqveE9xpHLaGghyioCMmu1RzgY3GdNW8IfS0Sm\nQr5ivlWjkQVa7RZuGlEpLeAGNpKQYncd5CB9nISSAAAgAElEQVRB0lWWsjx/9PkvcfTgDapLZ7Bj\nKJbyjMcdRr0Bq2srvPbyt5BlFUXTSFOJOB7zsL+DJusMJ2N+9Qv/FG845UA5Yni3j1IycPenRH5C\nppsIYorvg1QuImYVorlDJqqMPZtEi3lwuEW7tspecMSDuwckesKwOyOXyHipz2Dc5dWDN6m2NvEj\nH6mRkG+1KTRzOBOXsxunWD1xnMPxdXQkcmadQPKoNRbY2x+x2FJw/QzbS1jQC4ztbRZX3sv25HuY\nioxSrOL0u+y6bzJ3E+q1JeLMJRYT7h/22NDOcP7YCpXWEqfXn+A73/0OzjggclyMcouP/9xzvPKN\nWxw+7LBxeoOFC+vghmhKwnh/ymTmUyga5DaLmEWTyprBtddvU6qoTJwB5BK8Ysx0GqNkMcVSiVK9\nQXfUpb2wQrHSII0ddqyb3Lt8SH0TcppGfWGNVbOCIMF/8PzHeeLkBT731c+CO+Xr33iRXEumuW4i\n5VTWy+eYTxPOrD+HuPIenr/wNMvNFfScyW/8xr/8czsl3zWzfODpM3R6M+wgpFAwSMKITAQFkUyI\nmY1mlCol8kWJ0dhDNiBIE5IoQ9be4o5FeokH+/fQJxnNQolADqlWm5SVBKGaQxnPiRNQFIm8IBBI\nETOvT41llhSB7niEF0wJfYn2sRqLfptb1x4wtAOGk5Q337jMoHODVnuRemud+WzMZNDjc3/yWYql\nGtVyjd5gn95gyP3DOxSEEr/3ym9TpMCcOUf9MYKksGg2OdqesfTUEoai0t0eYM0sQt8mEmzknIoo\ngZzXmEwHNM0FFEUhNUPWT5zAOZzgpC6v3R2z3d9jnOuRmytkbg9XkdDFIkUtYz4ZsLHxI3RnV4g9\ngdBTGaYdmgUFUVGRxRFCDNWySaBGFGshiSZyvPkk2/uvs9Q6xeHhEZISk1tSUBMVX/FwsrtkNMmZ\nJl404D/58V/gmQ//FJHd472X3kuj0uL+7CGSkBFKLj4OxtihdKzEuUffz2T6kL5vo4kVktQjv1Ak\np+TIqSn1c6sUWglCYJBoAr4bMvH3ufD8++hdvUlpJY/tBAiejevOGfoWV998gc2VBXbHHrvWy3z7\n5deQ9JiiAN3BDmN7RLNUQNXLlAoav//iKyyvbrJo1Pn4pb9M1NU5kTvGX/34T2IoPucX18lIyRk6\nX/vTf8WXvvq9v1is4//6P/00tx/MkESFSs3EntoUKiZZIiDLGd48ZmWjTbOSY/9gSr5oosgJfqRQ\nKuvkSiZjMq4fvYFPiN07Qiktsrt9m4986gO88sJLFOQckSSTJiMmDCmqdSS5hJFv4HsB9mRGQoQY\nZJhlCN+UiLMYJZUIErBG
MxIpRhUM1jeaSIisn3yUuw9u8ZM/8ZMMh0OSxOfq0VeBIkWhxhuHW7jW\nhJxeQliwUEON3gOH2mKJw7tdcoqOWDTxhjMKbRVjs0Q5K9N3+gizALlmUDMMRpMxUSixUW+Rq9RA\n1Wkvtjk63OGw1+XejftcOrbO2LKorC9gVlUm0z7lVEROSkQFEcsbIWs+unQMKbAQ6hH9joMuqGSq\nQDyxEfSMABXmFmalhOPPkFFZaDTpOz1kz2T1+Gm627cgqTDtdyhHBZZWqvztX/wfaFTaXLv8JWJV\n5He+9Fl6Qh8lNlkumaAo9D0b767NWO9SzldYWTzNzRtXECKFxJD5xEc+w+99+ze5dOkUOzd9jKZE\nTp1x71ZIy5Q4tdDEephSblc5tXSaL9/4Du89fpzNWoMvHP0hC+r7uWFdxvRrXGyeImlLZIHLEwsX\n6dsJdgDrayuUqyb1fJVUzREc7XPz1mUWTzxGMJ2Syho5XSbWwJ5M+Kmf+Ft/bqfku2aWf/if/TTX\nH/ZRgWajDpKMHVjoah45shmPHVZWG2wst9jr2GiKTqEi0B+7VAsmhUYBG48bk5fZcW6R+GVy+gqZ\nbhH7EtJYxjcjxrctjJxKVonZNJtYRZcF8zSDyS6apuH6IfOhhWRKZEzpfV2g3iySOgKpmjDpzomU\niFzBpL3WwMwUFlZWyFwXxTS4M7zBqVMX6IQP2bs1QFdVRhOX+nrM9ms2jdM1RnctzLZCdD/CjeHs\ncwtM7RmWPaVxeoF0luDMEnKlGE3R6Q5szCxFFE268xkVSaV97hijvW3KtQJu1MMay6y3TjK3jyi1\ny4yTPtEk4cTKeYbDXUJRRhRiegdHKLqJUc2o5zbwnXtYsxzlmoahNShVTuHM3kAwc0ytA4pGAWs8\nQjaqDIZ96rUF5n6HkqlTz51ha/pdBt9J0DBY0hf44NMn2Hd9UknBjSfEORFL0inkDjB6TR5OHlA2\nl7jzcIv2mZO0VBln4jCKbCwn4sce/wD5+hIvPniBRjVjd/cQd1qgVU4p13Qq1UcZ37/D5G7A8+99\nihdm38XNHBpIGLUNgtDBkW8id6r0pX0a2Tp6c5WapLCxehp/N0ZdktDkCoZkcnT/dTZOvAdxFrCw\nXOP1l19hlsK505usbJ5jqsh86qmP/8UCg9uxRJJkRGFIFEYo+RRBEEiSgCQIEQWB0AuJUo+hM6Va\nKSPME+IgQiwUUTNQ5DxyWMDvlNlcXsURfTKrQmSHjGd7lMw2xYZKZa2CN5lz/fCQ9ajE3O4RaiLZ\nbEoQ+mhmnvl8RL6ZIJ+UmXVGpIGOGmtUVsoodZHhboc4KeKLKbfuXWYSQeJ5nHzPJQ5nO+zvHWLk\n8/hejFqaM9iFY++t0Lk6pdZo40/3yZ9pUBYEYlEiFWPa5TZCmGHbHvm6QeLLzKwpZiYhlAqoccj7\njm2w3e9j5ANkVWYezmkUiljBhI59l0zzUUIFLTKI5Iib2y+zduIMipOhqTqnN55ByyJu3L/GfLjD\n8eWnmBc77Pb3qUh5+ne/TWbOcPcyFpsmR84Qrx+iNA6oVY+RSg62aBMnKWE0oKw0+OV/8otYocfc\n9SiXy6xIBl4Q48QWN7deotAXiYMqE4YouSKqBvXjDbRcyOGhy/GNBVryCbpWl93gDtLhHuIs4Gg8\nxJk5BEmEVnof86jD3ksv4lsyVV3kJfHLaMsyJ8xLzOcuE6+Hr2xzYfPDvHL0Kkq+SaN06i34+9xm\nZ/s+kZYh2kX2Jy+Q+Bb17DTF/pDxcJtXj3a5e8fjwvtOcHfykGjPw82K76jZd68oGSQoCIiahOXM\nSMYRRn2BhBjfs5BkmRQJAUisGL1hECd9iEUEElLRwM98tFTh2NJJXDlEycpocoi+VMVLE6LUI18v\nYh3NMWoVamaBwAA37mHtzDh26jST/Q6abyHoIIg6y5LKMOeTqCLu1KJybIXpfpd8u4ATT+j0QzRV\npXCyhHVNoN2uMJjNKVfyJLFGIO2hx2UaJ1Y52L9NaXUZu3tAWowY7w1onVng6KCHIrr4io82LqGs\nDEm0HJGXIlRMjLBM4AikUkjeqGFKAftXZ6ycLDMRJeww4GSxTGFBQQgKTNyESrXIpHOPsLbG4bV9\nqs089doyl698gVCKKMkV5mLEK2++wuqFOoVmgcnkAbK0iOfKqJbJ1rCLaebI1AAlStHFGVYgoWUL\npK5GrdpAUFT+x1//ZyyXauhakU53zCNPHsezHchDuVznq19/jWMn28yyMUqi4SohyTxhfWkBb9OH\nZIRoLFAVazj2DqPJmPZCDdeKWKts8GB7n3nvJma1jF8TkU0XaQ1y5pzAbTIbTdEqVRJhQppV6Ixn\nnD/9OGv6GvPgHl+8dpVjK2cQYpvtw22OPXIcMZBIbJWj3i7bt64i6G3aywKXnvwQ4ajDXAm4zh2M\nYOMdNfuumSVIIwzTxJ2OSBBxgojYmVIq5AjDFEPKkJM5E1uBgkCSzcl8/y2Wbhhi5vPsjfZo5ptE\n3iG1UpX7swPCvIY92qGopQhyjorYJt6QGE3us3lqjdt3LyOmRVJDojOaEQcesa9iFHTsvYB8LmSh\nvsL0/iHN8006nX3EEBRZY+JF5BcMBB8EWaZ5WiHGI/Z9yq1l+rN9cmYVIy3T6dymmK8i5H1kC1LB\nRJVVgqGFkRfQNZkkElhaL5KkeQ46B+iiQZJkeMmMXKlEsBcwnk9QSNBNi3lPo7QgMxr32POKrJRb\nRE5GmHQZ7t9GEgScaYhRFlBzAnvdHrKSokYQKyn1VomoFLGQX2GSWGR+QGabPHKyzTfvvU79UZOk\nn+HbCo6Qsic4eKnN6fWPMLYe4MwP8VOPWnMVRQ9wZYnWpTXu3dphLk4paTnm7QFnL1QRdAlnnIfS\nENwGQjFk23qNUA2ppUvo1uvMLYFJOKKuXcAOO6iCQqLbXHziNNsHY7Swz0ojxtTPII/HTKQCia1i\nMeXu0S3WGxeohdCd3WYvtXl5J2Ulv0Cr1mamvYklnCVQPLyOjxBFLFVPsWPvcenEeUbCnIKyQijt\nY+s+R+E2BaeFOf//tF79u/Gu1VkmnSGaqaHmIBJDTAmsvkXsOSALyHJCFqVY4znMfVItT6aISNFb\nhy4VIUNUdDRRwYkn1M0Wx/InyDkyaqFG65FTLC4uM5UOcId7qFmVWe8+YQheFpJMQtLpnErDJN8s\nkssXMFpFSmvLDIIDjLU8Xm+IrBtoyxXCXPBWKKqX4fkuywuLzHop05GDlwbYkwHIBYZ3fKKpRaVe\nI7Ut7KmDmqtQXlhk6byBVBcR5uD6KYnmI5kCPeuApx8/hparoihFHCemphvUNtZwFYP9oxEtrYLb\nH7L92gFpLiYauxw93EEtZjhpiqzlEZISmpJD0TfxFI/Ih0AwCQ2J47UTyLaPM0npWw6ibdAfOzjO\nfa586yqGUSAYhsSOSCor5CSdbGZgeDp3XnuRcDpk5g4IHI3TjTMMBhmiH6KNYpaqM
hpFzjy6iTd1\n8PMethOTEZBo4CU6OnnEnEDUqULS5OGhg2HmOb/4HKYaYaYuk5lIvzcmtDJU2cYI2/ScGQ/3dum7\nDoPDCUpzgR/7zM/ynoXzjKfb7AnbVIwStfyjrLUXOBhHeL5NPdpkuVAgVzC4NTxie7hNaqesljS+\nd3SHnK+TWj3M5gZSRUVSy9y9cu3/Zu69Ym1J0/O8p3JcOe6c98m5T8cz3T0905zhDKlhkEzRkiwJ\nEm1Dhi8E24ANmzZvdKEbGoIASgBlE7ZICqLHFOUZ9WRO6Jz79Mn77JzW3iuHWqtylS96ZMs026I1\nsIffXSVUXbwvvvr///u/h7766Zr9qWUWz4+oiiLHzgQxlvA1AzsDvhOjyxK6riJaEsPYJV8sEHUH\npJGDbJjIpCBG5ASDsZ6SNFweHD9GEHTcKEVOHRw34vHbjynOTdEfj0m9Nr7rM44d5qs1kqLMSWMb\n0S2yNjvP7bsfsLg4z/bmBlq2RLvbxZNdRM+kUi6jTU/hFseQxoi6TtA9pnDFwozG9BURwYsQZRMt\n9nE8ASHWyJfrWKlC9/CAQBURwwg/cIknAmmcUizUCOM+ai3h9Qf7+KHPrDVL8XyWcBjTPTqgnFtn\nJm+S5ANQ86T9E55b/4u8aX+fp7VFPm7cQVYzWFqVRuMITYGo2KWQlvAYcvaGTOs0ZOS6nIohzjHk\nJId9d5t4MCGRakyd19m818KMTZBU/L7DtVsv8sEffQNj1kKZGAz2x0zPTCPIPbQZjeDjPpX5Vfph\nl9XKOhP1Qzr7Eq2+x2JWZaIcoocqYz9mEvfIFXySYYFCJsXQEkxbQZAluqcbDIpD9ERGkRzKJYPj\n9BA56LPd8tHLNey8T+BERMOAg+8+4h+90ySYOkVqwsKlPMWKyOTQJymtsJYmZM2QVNURBIXP2td4\nkx9QzJ5B9GSO+8eYeZOe0aY8LvDwwz/mYJSQy4UUL8xieJ9uiZ9aZglEgWDsE7kpcZgynjiMRw6y\nnnDSGxFEYzRLZdLzEVKZKOrBj1duNV3FNDP4YYygaiyUV5lEEj1nm9Z4HyWXY3DqU5mt4qcxiqLg\n9HxkYuanztB0Ryiqh26VKJZrBHGCUAgIfZ9CxaYsq8zPasw/scJn/72beOMej+7cZbh3TKQFaKqP\nMw5ZXKyxd3QMhwPEis5od4uFm+d45uwt1L6AreqM+yfMnFvj3LUVTCnPucpFKrUSZ25ewvJkAjmg\nJj+FYFWpzc7hBAOIygiKzvoTZzkZPWLq8ixpFGDhsVBa5+OH3+OlS89y/3CTvhrQ2+rjxyGWWYIk\nQZJDDt+5z+xahscfH9FtThimO4iTGDOjc3pwjGYICFEWrzlif8cBUaXkS7zwlS/w1//Of0YhVVFz\ni6hyhfXZOv5AIYhgKAW8e+9bWDM+D3uPYKxwOGxiaHOIus7zK1exkgpOQ8XIZ8lHKgIe7BXxm1my\nms727j3S0GRr/wGNYZeLgzp6PCGXsRk4EhIGVnmGhUvrSEGClM/R2PeJ632mbiaUzwYUg3mKMybj\nkcJbX9ug0QvYvPOAQfAGu3uHPG4fIvdbKGfWcQ2RbHCOnvZ91LyHUojJZqdZv/YiR3qLqTMlJF0i\ndV0Ou5NP1exPzSxiqpMaKb7voxgKQpoSRtBpNjF0FUVWSUIfNWOSJAqTQUwq+qTOkDQFQYipFLPk\nczVswSYYtXHGHrZSYfveIbEbY2kyeUNGFnUKUypHxwLjcIA5CpC0DPlyHkHp0nNPKVcuYWZzlOuz\n+EZIbJtEUcSP/vBfQOaU688+gz1bI5caNMMjJLlIKZ2nvFYlimScO6eYCxJx4PHevXeJiwI98QBj\nykTLicSyyVBosB83qV0uUZUkVteX6TpjxsEBljtCDXyK9RxResDi2jl6Q5nl6jMYikzlzBUyZYOx\n3yFQBX7w/lt4+ZQ8JtWlJVK7ycDbQrSLeL7M4q1b5KwCK7NfRFOy2OIZ5I6OUYuQSylG3ac6XyWe\ntsnJNk9eWaD8xBOUlRn2tu5ye+/7IA3IyDJH7UOsqka7d0ywb5Iv2piZMzgHMsWqztbhY/J1CyXa\n4G76gIyZQSvFuFFE21TQ9Br6vMvopM3RyCU7VSGatIlll7E0ZMQYvyfiTXSmsueYDEes1m9y2tgg\n1CYMDpuce2KOnLhKM/Z48PgRSrbJaOgw6ghkZ1bp+nvMLcxytK8z6oyZL0zRbYX86KN/xkp+nUfx\ne0ioWJZNPIhp9vd59eErLFp1dMHDCR2UwGR5ZfrTNfv/oz/+L6GKY2REZElBSFIUMUWXVeJYxHF8\nJKDf7xInCpIcYWk2UihimAa6LOJOQsbjMXoiIoce07ML1CsLCCZkMwXOXFxn6Pqkikm2ZCDnVdZu\nGETxgOJsDhkZgZj9fRdNTTk+3eK4ecjhzglh6NHqHJJVTKqZKiJZJEIKhQKnwZjMWMWPxuiZkOdX\nX2J59RLWdBFJteltO5RmcvgTFz35pE3qqNXm8R+8g+RZ2EpEu7PFg6P3edC+TVbNcXqvS6Y0iyWa\npGqEasXsHL1KzuoxOnnIceuAxxvvImoK2ZpHRo64tHYNKZYo5hd4Zm6RmeIqucWI689fZX7GRlTG\n3L3zEFVyMUIDvZSDaoihm5iagbPnI6saX7j1y8g4bD44wbYkPt58m3bvAZKus/ODBo3DHs2Oy+r1\ndRZXroE8on86YefBPSQjodMK0JQsO4192opLARNR95hEMWYiYCYuerZD6IRkzsi4Q5fIayJkYnJR\nltwoT1tMUeUSqj9g8/R90sjnw8NvUajnUTQVQVNo9HuksY8W51hfuU7OmmNmdgXZKFIpzWIr06Se\nSsbUuH7xKSxLYGS0yCtlLueeYDafJ68tsdtxyNoKhpHQE07pjU+JJIucbFOZq7B1+unbin9qZhEU\nHd8foRVs4mhCmCoodoykqnihRxi4jFwIRskn24bDNqjKJ4N8EZxgQOD62IaIrmaIxwHjsYOlZ3Dc\nNvc/vkugCYTukJJZZbq2SKIWqE8tE9oCSTgm9NtYiUirOUF1JbLaDP5kSBi5+CcqgmWRPbPEhXPX\ncKM+XXYhGSFmSqxfPM/O5gGpO6QyYyGaMVY/pFS2yKsGM8uLJK5Hq+Hg+x7Vq3n0qkqnM2DYjnDd\nLHrOQJVzrF16EjMZESh9ynqZKC1gLm9gzJa4cf4Gi/kVztZWUMM8lcIUjqbx0Vs/opKrkFE7fNS8\nhxfcY6F0kZ173yXVI6bsGeyyRKPRQMzWmHQPqNs5srpOrl4gY+XIWrC988f8xV/9dQJVouftMXD2\nmUQ+tmZz669dRHcSrr9wjSRw6Y1PmEQaw3EDLc4RezEnp/uI6GjKIp6j4SQ+B902tZKBryqMJiKN\n7gCzFpFDQUCGaJrYzzI1vcK5a2ew9TKHExdfNTH1HNOFZTRhkWDSpKAaGJFNMgbLLlHVLDx3yF7z\nHgVrgXx5hJSLiRKNaqHEfK3CwBqjEmFb
GVQjyzuD9xhKHsOgSamsockFxqM+QpJixFnMsYNZyJBK\nAZeW/hzyWYoFjW5nRNFWiDBISVAkDdIQRVBQsyrhJCKIfMI0YOIkkIAiKUiagm0USYSIbq+FJ7uk\nyIiCTRAl1FdXKczWCCYWrd6Q094J46MxF0rrpMMRvt9EijzQ8gRpQJKkGMV5GkfHKJJBNlfFWl6k\n/WiXpchn39tFECdYk4BMUcC3PDrNO0xVFjk8dpDjBDlWUVam8LVTummMoA2wFYPZokEoJHhSQG9z\nhK4HWDmd4mxK2HNJfWj3HjIRhsQ9gaPTDproYHVfxNILGLrIkXuHY2+Htvsxo9inlrEoTeVpDrr0\nfBXVrmIYWWIk1EKV4Waf+7t3KKs53LTNejFDYWpM3phlOjeHmtgszK1xctLDkUX++bd+k8tPztNs\nHtIfj8lYBmq2iJlNmPvcAq3mEAoao5Mu5cUqSjaLMBGwC2US1ebi+vPM2DNMW+dgmMOnz8lJhOSL\nzJRneOHCS5x0ZfzAxYgtIjxW1pawslXeePNNtrqPmS7rFGYy5Es6URoy8I/xE5MgnyIIIUJgEPcc\nqrkZpuwFol6Wne2PGI7v4EbbVHOQljaR7TH12gIHJ8dE6Smt8QZOT4KRTugKpF0RKU6RrFXms3O4\nYotYj5CjhMid0I+TT9XsT80stqbjRyH9VoBtB8hGgSgKUU2FNIgRUgE/DtBkkXgQYRd1DF1FSGMk\nUUVODOqlaXrdHnK2ykLBJg4cFLdHmhxxuPeAQl7ElC0kOSZX0oiUHhgpZmGesRMjDSNSX0ZKTAw9\nhzWnYNYNkBVEehhlg41gSDoaQiLg5mRsW2TZmmJAh4O999kU77Lb2CSRXYRUZGCkSFqP0IMkHTPx\nBmQTH8NQUO0BRdFGYkgYStSWXJqjbYziiGbDR7ZMpqcK9E8iJu4GrZNNvrH5VeRggbEfUVQi2kc9\n1MIYR+ow6PbwxCE97wHNzZT26TGFSZXySoU49fE8i/W1GT7cv8+omccL7tIYH2KKDluP71OvzuGP\nDdAEbv/RA3LFAqoc0xuPEIYe3shk80eHBOMxjfvHXH3qBi/d+gU0LSG/ajN2XbSciJQmaBmNTL2C\nKCeMRgk3L3wWMzuNrNhsHu1zoX6D/MJl1p+5Si1XoD8cM1Fdzjx1kbn8PGGa0t7xcfohebOKEYUk\n+ojTnRa6UeHs6hnMxZRWt4nt9ZgzMlTqFRYyn6PsaRTslMzpGrFXRkw8MI7xhRRHDondIWVTQJAr\nTGWmmWgRehKwv91gMpjDa3fpTxyOvUcomT+HA3xv6CBJGQIhxh8amJHD9OoKE1dCzZt4JMipSJLG\nCMSMGgPSJCFjFzBUmTSaYCkaiedxfHpEZxCyXJ7FEWKStMT8wjlKQh7iFBGR7f0Tglhl4omYcUgs\nJMzkiihZicGkwai/gxgruM0x8UGf+bk1xr7CYNClnr/E2HGxzAyuL+KmEmIa0ZUSQqnHyaBHtjzH\ncGePtcwVnFRFCQMkKUupOM1EivDlkPUzFvNPljHDAnaph20WsIsJZj4lr1dZLWepST2ytoG2eEJz\nvM966Wf5wgtfYjkzz6Mdj6LpsPH6Ljld/4R01iqipxJOGGFPFRmEEUqaJa/VeO6pKzzePmZ2oYgn\nPcbPW6RhyEQYsXLhBlJ2TKmoYeZPEddijo9P8RORhdnPoKguuQWJ9VuLXH76HHZF4I9f+T5f/+1/\ngBQm7H90ABjIgcm3XvtXfPDq63z7n/+vIEXkxCwfPPohkR9x2mziSTKHo3tksgr37twmm1WIXB9x\nbKHkPDzdJZJtrl3/Eu7kgMe7e4RJRF6fY33tRbzohI3eR/RbVY5jncxTz3HpF3+VQ2ePqZqOUFQ4\nHh7y5vBNUtVBjo8QBzWW3FmmFI35C1W8jkHjjxt4bp/+XhNVknh25XmWrst4WpVU66EEIuao9qma\n/amV6L/8hecRlAonR4fYtkogJciRRtZOQMqR02JSUUdCQNV0/HGfmfkpRESKpQJhIhBOJtRm6jzY\nO6GczdN39xmKIcP9HU5PmzSGA3RNZeym5MSYw8EuaS+ksSuTzUlMJiFarYAk+kiRQz6sUD6zSqez\ng9cUyc6JdCcPGZ9M8CwXGRs/GlMpawzdJlPZEm7ik82OUWIZ1dBxugH1TI4wlzB2fIx+jma/zdJC\njX4UMOpoGHFM2NOozdWJpU3K+g1C5Zijdg9JylEqXcTsTpGMEnqNbRrOBnutFjIpkp1luhaTSyuE\npsAkPMQbhiiaTokKjuDQ6fXpdx+wd3yAmHo0uiNkf4wiK9hmncPWMak6JuQUN+0zjPrIHljSLMvz\nVd7/9hvMry8SjwKGEYxdFx1IU5EbL36ZfrjJ+SsvQbBDOLTRLYn+TgslkxIqHuOejKaLdManLFWm\n2PjDE4SpECXqEKAQ9j0iQ8IoCpTSaXqDPuPJLqPOHnFsIKs+g8MAQ5pQU2R2ex2uLr/AfBlm555g\n7Dms1i8hOxt8tL1FYsb42hHXZi8giSU2v7tBrjxNRx7Rdly2D9u4yZDy+gA1PI9tzSDYm5xOPErL\nMdO5y8SDAVouYpS2uP+dP5389VPLLMcPNzClBvNzWVZWZ8nnZ3B9iZ7TI2o3yBWKlKtTqFaNbHXm\nk7l37ZOFppiUVAZJAtdxWFlYxlASQpvMp4kAACAASURBVEdiyVxFMUssLa0iE6OZFtWyjhQGeMmE\n2voa0ysGYk5EKmh0TvaIpJhEFRByEoPmY7R6Hsfc57i5RdArovVVdDmHSkihaDLutEn1EseTIZIk\n4sUK7eiA5rhHUu8xVDsoWoBZzFFeiVDyZbKzD5ibWebi6k1GGYH6WY3XfrDBTHiDNzduk8gh2UJK\nsxnS7NzmwdFDPv64ja5W6fYgioaYqY4/njBsxXR6Tdxxn0rJZnZ6FTlTZD9+QBAeUSsoZEvTjNo+\nIzViaUEl0ipEls3KQpXZCxMSb0C5cgUkA6k7zUBQqRRUAgFqVy2EVKE7iNl85Q7dgy7RKOH6zS8y\ncfcQRIt4PELVqkxwGRw5uIpLoim4jSyh7OMGAZYssd9oUrhYZrQHo1AlmzNQl6foxds4J/uMXZ+c\nb2HEOaxZC0Eb0z9xOXt9jcjR2Ils6rUKj07f4d6jbQwnT/f+e3z7j/5b3t/aRrcyFLQsfmuZaDTP\no3sfoE1nGDgBnckRV9avcGP9RRamzrMw9WXkgks2OMJcPMWYPqBzT+Zh4w3MrII3EbHVtU/V7J+V\n/JUH/glwgU864v9N4DE/AfnrV7/yLEHocnIaktNSRFkna4lMzc2SqhrFUoXmfh/VDpienWfv3j0u\nX18jnITU6zl8KYuYprQnPRRJ5tRv0/UajIYOp8MjPLnAWOqjy1kkb0CzPyY185iigiS5VOwCR8Mm\nARFRf0ypPIdhKIwDjyTsYFmLlGo5ukcjxu4xupmhnmQ4GO2SXTSZNF3KWo2B0aQoWAzVEWIyixSk\
nVKcqBCcQ5QL2Nw9II49IDLi59Blio8fHt+9jZeBkMkT2i5yZrWNNSzT3VYb+Nt2tCeWrCba4zlM3\nnuRr/8vX6XljFhan6MeHCK5CfsEl7ZUJ+zqy6BPqAWlq0Ro6lBYUZC+mcyqQW7DJdvqEtspEdIkd\nj6xexvUcEjHGa+ko2RZiycJ3e7x06b/B0vLc3XqLg0eP6LYcLjwd0xmCpJYIWn0coYPfyKEVJwyP\nZKbOWky2UwJ9RDhyMGvzqNIJglZnemkWNVbIG9Mcbm7Q7e2Sq5fxkwnDHYFMrY29MMXkcEhSy5AM\nekxVV3DcIa46xBuqzOSLCInFTvcuWW2JUNgmlzEpmVdhdIhQO4/f2CFQTskrWQIhRVZj/FFKO+4h\nDC2M/A57jsit2pMUZ1Z476PvMkgHXKmvEyc+jxsS1bqCHwr8wX/+6p9aov9nzSz/AHglTdNzwGXg\nIf8n+Wsd+N6Pj/kT5K8vAr8lCML/7T2qkOCHCoatEUsx7bbD3mnIzvEp+3e36R3cJxL7mJZFnEoM\nhw6D3gDPc0iiFFFK8Tp94jRFt22c2OFxv0ugGrihQHdwTCbNELsB/aFIkIqoUsjI6WOFCr1xB0NT\n8AaTT5pkOLuMfQU9B7MzNwi8CduH77PfeoyVzdHttTiI7hPlQlJPRJ6OGEkN9LhOL3RZ0K+ylp2m\nopdJ+gL7fpOTzTaLl+fJlAUso8Ltd77DYWNEebbAmdmrJI6MqEtsbQx457vHaGZI6maxCjkK2k3q\n6hzvP/wh1qzKSy+/AGZKvTCFVFJp3FboBzGx1WYgehQqJUTBZ/3MInVfQvdrmEUFU7HQsgWy1Srl\nWp1MsYIsqIxbYAYVBl5EElU5+aGIFV4jQ0qlUODW+ZvcfO7zPPl0gWFfY732AjNGjVJuCk2X0MyQ\neJQlP2cy3jJQlnUKFzrc+rlfJJ8fglegaFfx0zrNnV0++O7XePnZFwg8l/7glJxwgbQ8oj+eY+d2\nC13J4nR7FDM1TgdHSG7ETf1JyhkJs35KTj5F1mTKRgfHC2h2R6ThMUf+MU7nAQ23Tds1mURQMeoc\njjaRtIS12QqGkcPMvEzZn6Ll+2ztfpdbU0+yWnuOVtjkyG1i5044M30eS4w/PWn82zKLIAg54MM0\nTZf/xPmHwAtpmp4KglAHfpCm6dkfZ5UkTdO//+P7vgn8Rpqmb/0bz6a//PIN9EIW1wuYjAc0Ox6W\nYVPJpAhRSm0uT68nUClXqdbn2b7/JlcuTxH6GsurS8SKwmQSYugiEPCD5i6J7DLpB+yMu0yXMzS6\nTSQi3MgjmkxItBzyUYS1qhD3PcI0gayELeZwgoQg8Ihdn2zNYNQJkZKUWGqBmEOORIScR+B6FGtV\nJN1G9lvIcpX1uVWOxy0UPyHKRmiBycS9hzsoIQkK2rRBfODTGnX4yy9/hY/3j3jjX76Kj0BhWUbL\nCqR9jcQUqM1EiKNzbD56gOA6xIqFNjfms+f/fd549E1yRZFJ7wTLXGfsRUxV5njw+HW03JiimSNs\n6pxbyLCXZBBHJzBTQnU1StmztMbbLFZW2Bo9prs3JBi3kEKX/MoSzb0+mUSiP/FRJjpRASxZA8ng\nzPIZ7t17nUJxhfdef4vP/MpVbr/SYP7sMiebO8xet/GiJkfHHdbWS6jOBSRC2pGKGo1wTo6YTGKa\nj8ZY1Qyy1SNjFcgVc/z8V36F3/sX/4jp+TqdpsN8dZ5gcopuFwm1DhP5hHFXwB2aOFqHXJTHVHQc\nJ0QriwwaHey8hqxn8OizYpWZTHI4/h6ZrI0pTSHrEwJvQCl7hl5wQnuSkk4U8kYPW6ogFvO47T5B\n2uNwb8z3f2vn3zmzLAEtQRB+RxCEDwRB+O0foyd+MvKXqtBrDFEEFVMzCYIJhYyJ74ekaQRpShqD\nIiZE4ikJKYpkoJs5It9DFkS6g2MEX6Q3chm2jykoWWKpS74q0Bv0SVyFQd/B7ylM+ir+cR9tXiZ0\nYkzZIF/JUhINlEhCzYm4bofFhYuIwwQ166PmcoQJhL5AZExIRQVVrRIejqjIMY12H1eQeXywwZnZ\ns8RxzOX8U6hKETm+QD7JYs5vED0OcDSYXh/y7sYJm+1vU5mawR+nPHe1z5R+AyNjUyjMsJS7RUVt\n8/T1ZS5cWsDMZrAUnTPLs0SJx84bDdonInff3EYLAwbtU7Sxjckq/Z5IUp/w/uiAnJXQE0U27z1C\nT8u444c8eO99lhaf5vLSczzz7HOs31giyuTZ/ugx1UoNv6uQaUm4ahetOKGyLHHx6gpGXccq1zi6\ne4RdLKCmNmaxSGVe58ZLF0iElHCkUJ6CorZGqAzpnhxwZu48mYzEZBQzGftMSEhzMfnlJaZvzFFb\nPse7t99jcf4cy6svM1+5zvbkMR1TIJo74WHzhKhfRtaKDCddFKFALVugH4WYRgHUlMXaLQy5iOsM\nqQrL9MURuhaSLa4TEdAa77K90UTUDY6TdzlttymreabNWcZjHSc4QtYUjFKN0PYxzJ+skFIGrgO/\nlabpdWDMj3+5/nX8GCnx/4r8lUQB3SBFM8cIKRiJSn84xNBkhCRCIsELPRJSBu0WCAJpIhD7PVJE\nxDhCMLPIYkAoSkgZi0mQ0PQHNO7uI4vrzK9OMTNbRS/Z5PIFZs6vQCoyPHZJNIHt3TZSyUKwbEqJ\nypWrLzCYBCh2gUw0h6q7KIbBqBlR0mfwhgEoLvnFCjuPmuhxiuFOuHnhc1R0menSHO/f+zbv/Oh7\nWNlLbEUPOHgY0WptI9sJufIM7d4HiIHFl188w+y1s5xOVO4/eI8L5XMUNAu37TBKAt56fIchAdcu\naQSBw9f/t3/K4ZshorpApryGWVHoNz0SrQUFkLyY+kydnDJLljXMQpl6qrMyu0YjusMIhWu3Frl3\n8C3Chsve1neQhyFFocqlazNUpyOEmRPEJ3xufu7LfO6J/4C5yrOEoYolqfzSz/0VpJk+sxer+ILH\nF375l7i/+SpS1kAT5uimO1jqWQ4PHvLg0SMse5btze/y3Gd/jpVb0/zKf/Fr/Ne/+d/xc3/1y1y9\nViVjhSxXl3Aab9G+v8vBd/9nWsOP0CWdjNLj3ncOuTF9hTSUqISgiyZGYLDdCyhny6hin1Ab4Dq7\n7Gwd4WslDprHCH7CsNtiND5m43jA8ajHajVDyjTlTJGV+YiMpfGo8T1ct4GXijinLVo7Lbyhj20H\n/49G+LfFIXCYpum7Pz7+KvBfASc/Cfnr/tYRHSdi1FOpVTPIGeMTOrEooEkKkqwgI0HiEU1AlS3C\nOMX1PSaBi5XLUbR12l5A1rCZt0s0xx1GD/r83d/4H9Bjjw93H9I63UNLdyksGdy918ARE9RQQ7QT\nqq7C4FEb8j7dSOdmbZFGsIeZr/Dw7cfcfPks4Sgge7mAlBgUtBKiG9DcdvCsDEpOp9HfRzM6lPR5\nMk89RW1hGX/8j7n80gb6myXM9EV+4HwPz5/QafSpziyy
s9NFqs7xl56vcZiMsJ8xEZKI4X5CR94j\n1sZcmc5x6o344D7U5s9xuL9H/YyEYI2Yqs1zdr5OT9lmIfs0588u88rH/ww8i8bOY0IhpT5XZ3/Q\nQU1kMpGJOOOSz67TO+hyGh0y8XVqM0uULtpE8QEbJ7eZrp/n/vcavPP7f8ALf/VJlit5bqx9jtcf\nfZ1ee5fy9BTCRCOvV9lqf5f6whTuSELPnWLvX+Fv/cKv8Y+//fe4fOlFFHFEQbrE11/5J0xPX6HV\n2OTt039IrRLTDTRK+gw7t18jMiyEFZWRIjM/ZfN4u4XunsXKbXFnawe/NcGtZ3D8gKV52DsZcNjt\nkNUyRIc6URKztrqCuzfAXswx6D8kmEwTBRHFNEuj3eLYPKB91OQzU5/h2L2HVXyT0tw1CpJDmBww\neL/N/dYxk4lLLrv8J6X6f8SfdTbsR8DfTtN0QxCE3wDMH1/qpGn69wVB+C+BfJqm/xrt/ft8gtKb\nAb4LrP6bQCNBENK/8XPX+GjX4dxCDtmwuPfolErZxsDBkGTmlus8eNhmeXGKAB850VmctggSkWJB\np1CdRS5YNLtdSprB0bjPx5P3yJpXObdwETWTQdMSXvnhaxgW/Oh736A0Z6Ej4/kxRsWAboBYVNAk\nm1ajTWGuzGC/y9KZFfDHhGlEvbzGcBigBzGnkz4fvv0B1UUbrznCUzW+9NRVPti6y3/4t/8OaZRQ\nq83zP/3g17gy+ytEQYa1xc/QONjg+CQkICVXMNnb/h6VispJq0dGL5BL8zzsbrM4naHppMwVy6ih\nxm7rNiVtllB16Pj79CdDzkxd5eN772DULiDrPoJbYCZXxx2PeHz0Q/yBRr6sc3buLKfBNmMxYLYy\nxfH2Jq0DkenFHKXyIo8ODjCEMYY+R5p4zC/aTAID56RP0h6TLIH3cZdTJghJglmwmMmcwckMcIZD\nvnTmGT7ab4JxjKWcY/fRAyTNgmwPxffJz42Rwy/gOjHm9B1GvQHtsEnSFtGMEi8/cQMhmXDUW+TO\nnR8wWzPY31AZKy3yJYvdowamJRHGGkHLo2jqpJpOJExQ1AluX2P+7AzOqItzGJDJ5OiNA2K5hSxa\nOCMPzVZYsOfI5Bu0hjnyhZBSeRnBVgilt2jeFiksXkCLu2wHLZatFyhWbP7eV377J2pY8Z8CvycI\nggps8cnUscRPQP5CNfCDFkKSRZFjFCHCjSJUMUWQP/kyN0hojdrInkx5ySKNYiRZQgxThsMhzc4R\nSslgJrWZRBHjsc7nnn8R3x2C7+L6IpmszmjY4/pnbrG/dQ9VEBk4YxQlxc6VQO/T6exy9uot2s0G\nl1bLdEanhJpGRi1i6hbf+t3XuPH5y2Qsm6dfuko2W6TtHDHpbPD+/RPioM69N95l7emncNwWl8xf\nRnJOcdJFcqrC3HqN3/nhf89Ht1sodZt8IeVkeIqZtVGGZ/jrP/NlGqOrnKQ6rX/1VT66+x6urfDM\n01eRTkY8bByRSCk5o0a/K2LnqujCCYvZJ2llEtp7Dxkd7SIHEgur6/SPR2wf3UcvC8xlCzRbPr5s\nUKiXSAKDyC1zbqVOt/9VJAoUS3VksUAavEu+ViIoiGTss4yflahHKuONd/GKIkN/xIx9hpHg8NrR\nbVaUFTx7nrffeJWbN26w5X2Ejk8mqnHijTHGH5NTazRbKWO/S9ibpR8dUdM93rzTxNJnWavLaCh4\n4zpWfg9Fm6LXOsDUNNKehJxJkWoJra2EQBhRrZkcHE0ozUoIYYgXxGCLaMhoGZ/2oYZdlBCxiL2Q\ngdKiUn6aG6sKrdEYQ5lFjLdRzFu49S7OaYdTK0JwFrGmFNxD/1NN8GcFsN4Gbv4pl/6dyV+pALIs\nEYsxcRwhyyLBcEKUBREZJTUQRBEvTinkc8TDNqFSQ1HTT2rDVJF+r4XlmoQzBjBhqbiGmqbIikks\nhuw0t9AlDalSQgwESuenuf32fWbPzdHc3mQc9UiPJM7fegF5MsQ0BBzZwjBNpgwLh5RxIvOX/pO/\nwX7jQ8p5Hbc3ZufkXWYyT3Lv6COKhVOuXlzho8cfoNfy3D89Ybk8xR9vbdE+eo3vfxXmPzNNWoVx\nIaZgmCT5lJL2BGIwYbO5z/c33kfMltC8lCcuzPCoE+H3JRQ/4vWPhtizIrI0zdnZGt95/XWeuPIz\nfLTxI/ToIXKgIacKQSbPJB1w5frneVD+ETmjyv2PtimrFn6vQTG7iJHzmXavsjM6IlYPkKx11upP\ncnrQI1ZbeHpKNEnRtClCz+VB41XKSoZrt75Cd/CY04Mxjc5HeEGFgq6zET8kF+hculYmah9R0lcp\nZ2y2xydkTp/FyIKfePS3TKKKQqUgkTVVZHL02i1G7im7eyGSnsUbH9IZN6A/QNRDlDhCMYoUqllO\njj2MvIMa6ySRR2kuS1kqElFGNceY2SHZgcYkGDG9VEA0NYLTiFxRZM/pkdx+jeO1mNny52kO96kX\nq7zx7vsoskQiNchHn2VqVab78IAdf/cnM8v/FxGGHnIKmqIQJwm6peGHIKYukiygZC0Uy2DSHLB4\nIcRzUvRsDoUE3bKR9Cy5KEvVLqFHIgfbA5760s/iDyaoVopmaFTtBU4b7xP6AbIR4bopK9eLNNs7\nxGFCfmaW+pLGcPQeTtIjr80jJxF7O31OxzJ+eR9DyNM96fDSZ/8yW63X8dIJ+UJALH/A0lKWXDEH\nxZR8aPDtr32T/+jn/xbP3vwC9caIN09NxtpDVPeUyDvPV545R0PpMz4OKFh1Hjdvc3PhMnfvHpDT\nH1AvXqcjhojCgGphgVf+6HWWbljUSld5dP8ud4SYsj3PD1/9OpKgcXd/hG10mTp7kaW5JTqHG+zu\nvU6rc8BJM4eU1NgZbjI1W+Xo8QPqU1UeDb6OE0Scqz1LM9rnsN0jDRJOnNssVm4gKwpHRx2WzmZ4\nVjpHSZtmf/9j7j1ymL88YEpdJuw7fLB7yHPXLnNv732Wlp/neHib1eIcFKsYymPMoM1f+dIc3/62\nRDpvczI6IiMss3c8AiMiFxnsnTgISKjhEcFYIhJSCorAoCNSqIicNnqctPqoko0Xu6ysL1NesWFk\ncP/hO1wpnCVtu/hxGT+y8U+OEFWZL/yFl3hgv83e3hFltQquhzcY0uq8hTETc+g4zM9d5rTRQ9Kf\nYXq6TGf/fUZ+jOpkPlWzPz3khCDghyKqFINVgdRnPBmRLygkKWgCqJpMdxSztb1HRTUIvAmZYoEo\nmlCNsnzl5gtImsnjYZd/+MufI4xj7okjDuSEINHQlRBZSwh9l85kl4y6gJo9YDUTsDy3hjPep9sL\nyJtrBN4GveE+Ow9iyuVZ6ks6G5sW7a0R4pks3/r+P0Vxp+n5J5TmBcqFCXpSIqeXeLD3mPP1yxhX\nqoxEl+ON3+Wbh68xDiZMnDq5zBIrFZOgLpEPptGLXVbWMqj2kyT7Bzz/hZd55d6vY9sTgrFAfXaF\nD9/axy4JKJL
Fu2+9gSBGVKaq7HS3SAYaqRIg6AGenCGn1+g+uk/LP8QsFZE1C7mqMD81g5vWOTi9\njbk05mhzn4wzizs5wV85Zf94h6x2gqBV0bwKrudQz2Q4u3KRTvOAk/4Ow9EI6aiNtpphtDWFXhnh\nySpBGHJ/+y5eaNA8eR9ZS5kKsnx/821WZ6eYeXKPb/wo4MAxSNUca5XrHIw3ubL6GbbuPqAv6eRE\nl67QJ+3ZmNmUaJTFScakHpzcVQnNFEPyydgRomdjGWPSkcLu8IRJq0t/9y52QSYd9BHmhjyz8hzC\nQKHR7iOaCvMrIXJbZqCcZaK+xiAy6e72Qc0ga9tIkUQtcGg1v8NkkKWaV0lmzgN/OtDop1YbRpKQ\ny6hoShYFEd+fYKo6vgembRAnIYWCzuFxj6GXkIgacZQgkDLpDQnjiOPhhMFgwGyu+kmnkVTgRlfh\n57WzXFCKqKrA0NlmeLKHEe9y7H4NUQh568EmQQoPWw0cZ8gH997n/ist9j6exjYXcdI2G4N9zl0+\ny9IX80gDB1Msc3S/Rz2fQe5m2XhLx6jMMxjo5Kws3vFjNMtmc9Tm97ZeYaE6y888+Xe5tPYFkknI\no0OBs1PP4DXGTEKHqeJNWvt7+LMiH3z4GjPlv0agnuP60y/TcDawCgLrl58hq5eoz6jYswWOx7so\nkkr5vEIkWiyunUERLcajxzSHPYwZlctLz3OlvI6tRozGXUajbaQtne13ZG4+PcPCao6Fm19kcHrC\nWXOFxJ/gDw8YHbeZHLrc/6jBdvMHSGYM/cuYsUDx7DpnZy+zvHqB0/aQ4+GYWNIR4xUSwyNr5Rj2\nS/zLN7/GaXufrrDDx2/C49NNDDmLMNnm45NX2bl3wvb+x2w/OGSxVmLkqWijGVYqNYyx9gnaPYhI\nVBm1Nua567NcfvkFzs+tkdNEDm/3ePc7H9N965h8Ps8gVeg4E56on0dXXHb9O+gzRYqZRUadNs5B\nnlZ8xEB5DyHNohdlbL2CgoHrj5DSEyoXb9Eet0iUNp9f+yKW9ucQwCrIGrIyISEgDFTiRCAKAwxd\nRpY0NMUga0YgC7hjiG2XKPCRxISQBM/32BjtUFle5vygA1NFhDAhElOkcMwFMcdZY4UHw6skl0d4\nzlXs8jf44L2P8ZwMF6eXub/3KhvvKFjaMpWbAkulhFdfOeaJz57BifZ4vHPE+bl1nv6bv8jjrXvU\nLyi0PrhD5twMz/78WfxkTLN9QtTR8OsLLM1opFjsfbBHTlrhjR/+Ls2DLZ6+8STZgsIfvvoHZLwc\nkmLycP8j1p+4Suoa3N7/Q+b2q3x4/4fY7vOU4nM0hQfk5SFjN8A2TCYTn/nZFfYGEQktnr+ZR6vP\nUirUUAsPWa2t0e59yDe/85vUz17BWJqiLK8zN2qzcekuys4qzckyzf6b3H9tk4s3C8TLh5ypvEh1\nap7XX/89hlGIYA/wT2QmTkqm4JOtL3OwdciDh/c5P/cEJCnXllZ5zzshVzQ43g4Q5EVeulrnXTPG\nSUYYksE7bx8wvTCLl+yxt3+EFmYQbR85F1E/l+fe+29TmZ3hJOxw3BVIpQzjwYRUkAjGKRXTYths\nIbcO6YpTHO61WDo/RZiOGW0YKLmUrOFy3Djhwxhs2aK0ep1H7v/I4GCZkbBPoQyTqExCi/6gSq4j\nU5hep5oVyenPsTHcYXz3dxACgVz+CX547w7udP1TNftTM0sSCNiyiaro9JwJkpQQ+SlJIiCpMgI+\noqBQsDIkQUwqQeD4xKlEnIIwGZGdWqEgA6FIcPoINT+PFAO9HqkdIigyv/7lX+Be94CvRm/Rb5WZ\nrXo8/8x/zGn3IaVkgblnu6yZGY6dEo8fvc3aUzO0GifoGYuzazNMJj5v3n6TaVFEtkXWPv8MehJg\nmWWKyiqVwiwdu8WED2k6d1gt/QLnrv0F7r12SD6ncWb5l9g9vs22e8Bi5QL54oQPjg8YHXVZtJ5i\nZ/81ErHGq/uv8/nPvkjkTnh8dBc3tnjU3aQytcRM9QrR7gaC7lO5eJ+z1q+w2/wGs9Iiu96PyHbW\niIwdbP0CW8KPyIzvkMpdPniY551YRDUWyC60eLDxAFVMOP/ZNQKpTRq7nB4/4N7Ra2Ttc8QolPWz\n9MM7rE4rfNDcx0oqFHSZex2VnJjneOdtQqOMP4hI5BG2ZxPpJp6ioaoqV57M8PD7Ltee+AyD/j79\ngUwkgpd6yGKMafnsRTZXb93i7oP3UPoC3V6IYTVRsMnPnScnQmf8mG43JE4cilMSRmrSH01whzrV\nepnQm3A8CglNiX3XJTOMuDVXwgx/lr3om1SyeQZeBN0B1ZkM0vSE1sjEHe+hSBJvtN8g689w2w0p\nlpYIlFepzF2j05A+VbM/NbMcdPpMl7JECciyQOSrRKJHTIwchyRBRBp+UhgQCjGBHxGLCaIsI4gK\nQrZKoaTi7ndJBZDy04TtbVR7msSzEOSQJAiQ1SwX83Uymc/z+wcu7nzEyf13cLUCzjAhGM/waNr9\n39s7t9g4rvMAf2cuO3u/L28iKV4kUpRoW7IiX1TVkq3EjoM2TlK0KFC0QB/60gItWiB106e+tehL\ngDwXBdKgtV/SW5ogTeG6iYLYThRRpiSaonjVkstd7pV7m92ZnTl94ApRFCtRYpubAPsBAx4eEvt/\n2HP+nTkzO/+wkX+Nce8wplLHH5A0my6KAqZiUS8WKPhiTIxeYmw0APUOW7dv4vUK4sNJ2q006ZyN\n12PRbP0XTuM8nXCapmqQb+5Rz9jEZ4dJxGYwzWU8eBiseHiv9hYeGSMaGkGEajTVCtmqQWTgDLp0\nUFolxH6H1eICjhgiZR6hsAatuRoJ+TiFXJOTyTNII0GpHOPm1ndJWNPIfAyptnFEET1gEx228elT\nxGUDLezQKrXYud5h6JknuXlzmWNzKTLbFagXkKNjjIemubq5iRMrs5Nep9HeJTka4vv5RU7Mn2Wz\nuMx+xktooszRJ2YIdrxkb66RPFNi5XsWlYZGfWcNN2Bh5yT+WIrSXhuf1SA5cJbHhhvcWLiDoiaR\n7j6RkRbDwwNk0xYeX4mGezDGesKLiCQQDQ9Pn5/j6ttL4A2gGFW0jorr8yBcH4oIEA7ZrO+9zU5u\nHcfnp1xt0CgqIA2E1iRiRYl5YxRLq2SKEq8RQQ+0kR2DCdfLWs7DjlFA93YeOmd7tmZRFYOAXyBk\nB9s+uBlLaDq6poEUCH8AwxBIQd+qEwAACm1JREFU0yERimE7Gh7Ni2V3UHSdUDyIk2nQQqJoAUS1\ngOqo2O0iqCaYNVTLRrbryI7NUYK8nLyInZtl01yi3Skyc2KWaiGHIlzOjFzGSvpxEIzGjpCKpFi/\ns049U2Ju5FnGJ+ZRjTqZbIamT1I0cyztr/PGm29w+26B6clZIqNnsSuDDBkJkmMfQwtEsNQagek2\nqseklNtme6+E5vUiJ8aZnZwjNKLS8WRpRzQyb+dp76epdNKU17+LRwuw
u1clMnyOofEpcs02UzPH\nWcj+K4vWN+l4dNbaebYydynWd7EMSVAzaVVWyKybDMXmiKrzzB79BG7Rw5HAM0RaY3T2TCIJjdZ6\nmdTRUSKDlwgOORw7lsRU6uznFwlXwtS3dTS3TcOUJH1JEsFJttaz6EMOyYkg9bYHq+PSjn6fPSWN\nKM0zoIwyKFL82rOfRTei+NwhqtkC9c0iMyfnWF9c5NbaLarpIqkjE5hRk9DwNKWOiR4MoooU7dVd\nogMxmjtF6isGG8sb7OR3GDo5wOyTs2j+UWqOgqtr+KIOVqtCXXepkUGEJLJVp13zIHSbRlPgLx1j\nLxNAs2x0T5ATg5MYtsWencYIuWx0ysQSoxR2bbIN+6FztmfJIlsSw6fRETZOxwZHIeb14vXoKKqB\n4VGRqkI4nsBjOBheDUU3cFom/oCH3eVV3FoV1RfBCIbRNAM3naHTbCMVAaoHF5CORDga0m7zuCeK\npniotlT2sjv44wbNgoOCRU4aJGQC736VrXQGRauhq2F212tce+cKyYFR/D4Hrz/P1e+8TsFfwbAl\n/lSUYECSzW4y4CYZP/4Ctf194vsGo5HjzB59ksHwCVwRIXvnh5g7FeR+nWbaZOXuVSqVLex9yeDA\nOHduZ4mE/UTqKpODs+xcu4YMZWkWbuBVStjZ92jVdok7Z7Bxqe3fxSyUsRoWmXyFQe0cxmAKZfAY\n0UCSrbu3yTd32NtYIV1ZIb3RZquxBoEATc1hq1Ll0tN/gLBWOTGtEgyeJOjxsJhVufKDGwxrCbR4\nmHbRoZzNUm4USE02qWebPDt7gWRylPTqDUTNw9j8PNXaOrGxKfzDE3i9Izw7PQtBg+nHT/DKn/wW\nlbqHaiHKyMAQp8+fprj1Lv6On1SqhqPV8EiFUKeFNaKzl61TaAkawqKQdkmOSlJKEKchCfoDDKba\ntBt3UdUEwaDBialztMwgmUWJVY4j3Q6Nss6RaR/tliAeHMFCxazm2SrmaPsTNIsCnxqkbFco5Hfo\ntJoU87WHztmeHYYFggLD1bBUDVwTnxGm2trD0FJ4fCqq1JBS4AsaWM02ml8SiPlp1duIRgfdq5FN\nRYlaNrbbQVcN1GAEu9lCWmU6ShhV0xCqA50WiitxdY3P+C6yXP0/9LEjNJqbGK5BNDGPuPUWjYiP\ntsfHWOwYdzaXiETinDqdZHrmEv/2D19ChEP4jBa+SIiE0LA0l4QMkLEdhkSShbUVEmtlUnOCpcVV\nng69wK1vX8GIeKlZktBjE8xHptnMb1FcXCQYG0J6vFTsEm1pkxwc5EhwmJ1KkYpjEzo1i9UwiQxP\nktu/jdkusZZrUay7TKWexx9p0DFTbOxmcGoa5fAWZqGKz+fnmdMvktgeIJu7ysq1VXzHbBxlicr1\nQQIBP5QsjPEOoUiE9Moyw9tDdFrvYgRipAaD1NdaGPEJKqU04dAgahTa9QJ3awKlZvDW7uuI+hH8\naojwzDTby+sYms6Jo59AyV/h1sLXqXkzCEVjbGCIndwC/qltToU+S7b2HvXdZeywy8RsiaUfBjl3\n4QXMeBPTXSNuOKhUGTp6DDcv8dfqVJsQmIpyaXKG7731DRKek+Q9d0iFBsmX01x97w1Uv0JiTsWu\nuLSaDqobo90wqLducDTyBPsZC9NRMII69fImCcOHaYZI+lS8fo0b726Tmk09dM72rrqL16DRaVIz\nwZGSQExFVby4HodAIIiqeDA8AaLhEA42qiLwCotCzkSLxpFtlbawoNPGbJlIXUP6dLyuQG1baC5I\nBEJVwLKQbgeh6ZwejrNdL9Iu7RKyA+jBGGqnwNjzH2fg+MdIzZ/AipYYnx1H16sEYwFiaodzL14g\nMT1MJJUk6EshzSZttURi/gRHx+cJDJ8kYidoxHf4wbevMRAJc+XONwkdH+T4Y/NEh0dwNY1KuUDI\nP8iFy+eZuXiRsalRzj/xFLa2SDabZurUMLanhi/YwpJphH+J/P4K9eY+Zy99jtDIGQxp8+d//Ht0\nSgbe1DyOrXHy1CmShskTySmitkHT2sL1Z5g69Rynz11G070Q8/LYyxEuvuTDNSS+wTb7hatUHT9K\nZJLjZ54lHgtSVnfBdXnq+csE2jaRYIIwM9hmgon4k5w5PYSyE0Bvu5x66Tlu/e+bHIu/TEx3ubL0\nJaoJi9nzTxOeTHPydxbYUP6b+FOrRKIqtQqcP/5H/PZv/g2vfG6es9rvMyA8bCxd5+TxSULGEBdm\nfpeocpSx+BQivst2Lk8x0yK3u8C/fPU/KNa8LGZuM+yHanOTSiHO/h0dvean6YAwHbxKjNCAw16m\nRv2uQb3lotVthBnDaPnRmj5y6Q6N3BqxhJdqy0tiIEl9o/XQOduzJ38detA+fX4Ofmkek9enz68i\nvbuC36fPrxj9ZOnT5xE59GQRQnxSCLEshLgjhHj1EOL9oxAiJ4S4cV9fXAjxP0KIFSHEt7qlnu79\n7Qtdt2UhxIsfoseYEOJNIcQtIcRNIcSf9tDFK4R4RwhxXQixJIT421653Pf6qhBiQQjxtV67PBQp\n5aFtHNwwtgpMADpwHZj7iGP+OnAGuHFf398Df9ltvwr8Xbd9suukdx1XAeVD8hgCTnfbQeA2MNcL\nl+7r+7s/NeBt4EKvXLox/gL4Z+A/ezVGP2s77D3LU8CqlHJTSmkDrwOvfJQBpZRXgPID3Z8Gvtxt\nfxn4TLf9CvCalNKWUm5yMBBPfUgeWSnl9W67DrzHwW3Xh+7SdbhXAdvDwYdYuVcuQohR4FMcFHK8\ndxaqJy4/jcNOliNA+r7f37dM0iHwgco4fVCEEBMc7O3e6ZWLEEIRQlzvxnxTSnmrVy7AF4HPA/c/\n76GnY/R+HHay/NKdp5YH+/afq4zTB0EIEQS+CvyZlPLHvltxmC5SSldKeZqD6jvPCSGe74WLEOI3\ngD0p5QI/2qs86HqoY/QwDjtZHiyTNMaPf0ocFveqaPKLlHH6RRFC6BwkyleklP/eS5d7SCn3ga8D\nZ3vkch74tBBiA3gNeEEI8ZUeufx0DmNhdN8iTuOgOswEB8fKH/kCvxt3gp9c4L/abf8VP7l49HBQ\niXON7oXbD8FBAP8EfPGB/l64JDkoXQXgA74DXO6FywNeF4Gv9ep9+Zl+hxHkgTfkZQ7OBK0CXziE\neK8BGcDiYL30h0Ccg3pmK8C37k2c7v//dddtGXjpQ/S4wMEx+XVgobt9skcujwHXui6LwOe7/Yfu\n8oDXRX50NqynLu+39b/u0qfPI9K/gt+nzyPST5Y+fR6RfrL06fOI9JOlT59HpJ8sffo8Iv1k6dPn\nEeknS58+j0g/Wfr0eUT+H9bjcOnSi+aPAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# draw box of the ref using 'green'\n",
+ "plt.figure()\n",
+ "refer.showRef(ref, seg_box='box')\n",
+ "# draw box of the ann using 'red'\n",
+ "ax = plt.gca()\n",
+ "bbox = ann['bbox']\n",
+ "box_plot = Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], fill=False, edgecolor='red', linewidth=2)\n",
+ "ax.add_patch(box_plot)\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "IoU=[0.09], wrong comprehension!\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Is the ann actually our ref?\n",
+ "# i.e., IoU >= 0.5?\n",
+ "ref_box = refer.refToAnn[ref_id]['bbox']\n",
+ "ann_box = ann['bbox']\n",
+ "IoU = computeIoU(ref_box, ann_box)\n",
+ "if IoU >= 0.5:\n",
+ " print 'IoU=[%.2f], correct comprehension!' % IoU\n",
+ "else:\n",
+ " print 'IoU=[%.2f], wrong comprehension!' % IoU"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "collapsed": true
+ },
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 2",
+ "language": "python",
+ "name": "python2"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/refer/pyReferDemo.ipynb b/refer/pyReferDemo.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..5e0acca58e58539979beeb256048f09d0a092145
--- /dev/null
+++ b/refer/pyReferDemo.ipynb
@@ -0,0 +1,229 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
+ "source": [
+ "%matplotlib inline\n",
+ "from refer import REFER\n",
+ "import numpy as np\n",
+ "import skimage.io as io\n",
+ "import matplotlib.pyplot as plt"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Load Refer Dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "collapsed": false,
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "loading dataset refcoco into memory...\n",
+ "creating index...\n",
+ "index created.\n",
+ "DONE (t=9.88s)\n"
+ ]
+ }
+ ],
+ "source": [
+ "data_root = './data' # contains refclef, refcoco, refcoco+, refcocog and images\n",
+ "dataset = 'refcoco'\n",
+ "splitBy = 'unc'\n",
+ "refer = REFER(data_root, dataset, splitBy)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Stats about the Dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "dataset [refcoco_unc] contains: \n",
+ "142210 expressions for 50000 refs in 19994 images.\n",
+ "\n",
+ "Among them:\n",
+ "42404 refs are in split [train].\n",
+ "3811 refs are in split [val].\n",
+ "3785 refs are in split [test].\n"
+ ]
+ }
+ ],
+ "source": [
+ "# print stats about the given dataset\n",
+ "print 'dataset [%s_%s] contains: ' % (dataset, splitBy)\n",
+ "ref_ids = refer.getRefIds()\n",
+ "image_ids = refer.getImgIds()\n",
+ "print '%s expressions for %s refs in %s images.' % (len(refer.Sents), len(ref_ids), len(image_ids))\n",
+ "\n",
+ "print '\\nAmong them:'\n",
+ "if dataset == 'refclef':\n",
+ " if splitBy == 'unc':\n",
+ " splits = ['train', 'val', 'testA', 'testB', 'testC']\n",
+ " else:\n",
+ " splits = ['train', 'val', 'test']\n",
+ "elif dataset == 'refcoco':\n",
+ " splits = ['train', 'val', 'test']\n",
+ "elif dataset == 'refcoco+':\n",
+ " splits = ['train', 'val', 'test']\n",
+ "elif dataset == 'refcocog':\n",
+ " splits = ['train', 'val'] # we don't have test split for refcocog right now.\n",
+ " \n",
+ "for split in splits:\n",
+ " ref_ids = refer.getRefIds(split=split)\n",
+ " print '%s refs are in split [%s].' % (len(ref_ids), split)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Show Refered Object and its Expressions"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "ref_id [22758] (ann_id [540661])\n",
+ "1. woman in front\n",
+ "2. lady smiling\n",
+ "3. woman\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAEACAYAAAAdo4LwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXewbXlW3/f5hZ1OPje/3K/7dZgOk7oHhhkYZhAiFMmW\nymCEURmDbRkJlWWkYowBSwjJhUpg2QiXXVgYHDRgE0QQaQYY0gwTu6fz6/D65XffDSfv/Av+Y597\n3309TTNVZuiR662qW/ecvff57X3O+X5/a63vWr99hPeeO3bH7tifb/KNvoA7dsf+XbE7ZLljd+xz\ntDtkuWN37HO0O2S5Y3fsc7Q7ZLljd+xztDtkuWN37HO0zwtZhBBfI4R4XgjxohDi+z4f57hjd+wv\n28RfdJ1FCKGA88BXAteATwDf6r1/7i/0RHfsjv0l2+fDs3wR8JL3/qL3vgZ+Dvimz8N57tgd+0u1\nzwdZTgBXjjy/utx2x+7Yv9P2+SDLnf6ZO/b/S9OfhzGvAaeOPD9F410OTQhxh1B37AvWvPfitbZ/\nPsjySeBeIcRdwHXgW4BvffVBv/g//jCDXo/Qe0QyZKMbsUgzRmHAo+fO4seXGO3dQBrBfu5Z2TjO\neLSHxrN14jh7o33OPPAwj5+/yANvugfdP45KAmJCimJMnc1wPqDV7+K9x3vFb/3aL/Ged72LOIxp\ndTrkVck//ef/Ez/49/4zClMhg4B0ssDh2buxTRI4vHHs7Mzo9QbIIKC3MiDqdPmRf/GT/MHv/T41\nEiE1CLU8TzMPCCEOHx/dBuCluG37wf/x/j6DlRWEEIfHSicAh3QGV1bcnYT0Qo9RAZfzmqkOcbpF\naCwIC1LgBTgBUt4KHJzzSCHxGITweC9QqMPz2yU8pJTgPXvbN1nd2myu+8i1N+Y5GkA0ms5rv+db\nx4jD/wfHHf1/1A73IQDJ7vVrrB07DniCQIF1KATC+ds+K6OXn5nS4GqUCpHO4xdT3tXv41otPjmb\n4eMOOghwR84tpENK+KPf/OCroXpof+Fk8d4bIcTfAX4bUMC/ei0l7Bv+o2/j4rPPs7Hex6kIl+Zs\ndtfRezs898IlutIynbWp5zdJnWDX7+EXOYm2pJeus7axQWk8UkI76ZHohLzKcKok9IJ2lFA7jy9K\nrDXEYYtu1EF4Sb7I8M5jnQRjMWmJy3NqW5MkmjDuEa0OyIo566c3WV+3uEChpCAQEq3g/d/5Lfz9\nv/WdfO8//Cc8/9wF0BLvPM4D3jVfmmj+jGu+UOcdHg6/JCEF3roGjIgjoGyA4gUYKQg96Lpio5Ww\nJqEUgkuZJVUhwmsEBqscCvB4GnjfAl1d1yipl5tvAdQdOZ8SAnzzao4AeznQ0e8XKcXhOHAL9EeJ\ncpQ4QhyQtjkHt3FDgBfL6wbvPELe+kTcqyYa5xxKCrxf5hDeg5QIQBuPlwLtHUYI8A7rJe0Qhlrz\ndJpidIsQifEWLQOEWH5mUmKdfV1sfz48C9773wR+8/WOya5fZefCS/TMKUztsd4xlIY1ak4+dA/F\nNKM/zFGDh1ntrCK2Nth/4nmuXnyCsrZMFyn5znWMKfnox/+UaVriixQZSrwI6ScBVniiIGTYHtIf\nrjJcW6O3uoE0hhoBdQF4itKRTVK6LUe5v2DwyN3M9i8RK0c2muGERiGYz+YUacrKYEg5mpDbm7zv\nix/kJ/7JD/BffO/38/LlbZyKCYXDGIOztgGelngvwMsGG3gQAg8oKZBuCTA4MqMCXiCNwVQLTrcj\njivPXEquFI6ZVvhAIvBI4RFSYo6McYBH7z1KNTRCeAQSvEYIjxDu8BiBOqTZwesPMf0qb3Cw59Ue\n4dbh4rZ9AnXL0wiB9+7WjL70brfO5Ztp45Dz/vYxvcUJgRAQSIlzDuMsXoJEIGk+B4tDAh7Behgi\nlGCUGQg7WOlwQmBxy/MIvFDLSeDPts8LWT4Xq2rB6tZpom4HXRYkcYtiPGdtY0g2H9OSGqfB147R\n1WusDLuYOqMlDYoKM1lAteDc5km2d3c4ttKn31un9JKtBx/jyaee5C1vfQBZ1hTWUEznyCrjY3/4\nQWb7Y4TWBKHGlClXr18iRFBawXg0pveAZTyfc+7cfaRlias9Ck1noEmSNnVZEnfa4D0rG+skpuBn\n/8UPc3VvwXd9zz9gUUq0DjE6xDoBvrwFNkAtweC9v/VleU+StBpoeINazraBsZzoJgyFofYFl0tJ\nHrRxQqAEODxyCSyx9AyHIdwyDHOuAefB81sh0u2gllIeeoik0/6s8PFoCHXw/LXqdEePP7pfSYX1\ntiHwcqzm2uRtr3XOHV6LkoKk3UW4xiPJI2Ma65BK4YU/DNnwEo9HyJBCe9qznBODLnOhSfHoQCKk\nxOGa0FQ1lPIC3J+D2TeMLM+98jI3L99g5Z2PYZSmrmp6nR7ZvKIqK1qxZjyf01UKX6S89MTHiUxA\nHMYMekN8XaOVIp1MCL0nEBZhPbGQmMWUYnKdIL8LmSSUuyN6seSeE+tgauzpLUprQAS87xv/A6r5\nDuOXzmNNzSMPPgw+5ezpexAqoKgWTIuCci/lpVdeZKszZGc25h1ve4hLL1/k3KlThEmb0XjCPadO\n8OF/8wE+9CeP830/8IOouEsQxdRGYYy5NTtahxMgPIgjs3SctHHCNrN+XUNtOZN0GIoaK2NeSg2z\nQOJkjBa2IYtoyID3SKU+C8wHRDkK9FsAPpp33A7wdrd72/YD+7PysINwT0r5WZ7lwCNZtwxPj+Rs\nQsjbrtm5W5AVDjCOYbePd03cZXEImrGEkFhvEWqZL1qP8QYhFMIaQNCqavpBzPOLlFIHhMIg0Utv\n23wHXggcHvH6juWNI8ub3/Iw0VvfQrS5DjrEC4mfLbj04iu8/OwFNldXybMFwd4IqUP8oMf2jT1O\nbLRRMiSOu7SSFnWZ0RUh0zylrBVrp09jhCIJIVvMaKEJdAuvDFEc4J2nXGTE3lFKj8nGTC9fhWqK\nUCFX9y7SKXeoM02oNIGGgXXUvYREt7jvoQd4QBZoHfKudz6G0RKpE7YGAyajfRbZFO3m/LN/+kN8\n+I8/zq/91oeI2j1UoDB1hVLNl6OcP8xhEB4hVTPTuWbm9abiRCeirWq8D7m4KNhNYqRwaGloAjDH\nAbZePesDGFOhVHBLWPAe72/N2kfBcRT8Rz1MM9/KQ8Afgvog5Dv4W77u4P8BSZejL8dmmSMc8W7+\n1v6jyf6t3MjjcHg8UumGQMtJxh6+52UOJQXOS4R3TUjmYC3QyCjh0niCSGL8MvwVzjcJJRIpPBZP\nIF6/kvKGkUWJmrrKSV/cI253KGuHjlucvucugjBgEAdI5WiZmu3phJPn7iO7N0dp28yiUYypKz7z\nB3/I6eN3cWmyy2J/j+KJz3Dq3ruZvXKRD/36HxImK3z9t3wzk/3r1HVFkWX0ojb9VszaWoIPFM+8\neIljLcvqIKZMc9phF5MY0nKOqQTWO
jY2TyI6MaNsypo2uNBh7YJ0URC1++zOZrTCCOENe6Pr/NWv\n+Qbe/o638t/80Pv58If/mH/0wz8CSiMQGOFxgcJ7iViGUN57kBaMRpUlx+OEE75C6ISXFgXTWBPa\nAsIYIdwyZJAIwW3A/GxPYDmaUR+A0FrzKg9gl2A/APJRMvnbkvCj5zkglVp6tYNtt3kWLM7d/tpX\nh4SvJsqBICLULUHAOYdHIJc5y4EXkkul1+ERshEJvJQIU3NssMKoNNRKo7xqFMCDvMsdjCtQCjhy\nja9lbxhZwiAhzeboIMQ7gzQ1l89f5oFHH2NlfQCLBVWxIPSeVihY7F9D1zXjGzdpDfvEnQ51XSKd\nZn3Y5q67v4xWu8N0tMelK1e4vLCMpobt51/gv/y+R5Dn7sfKCLcs8TzxxOP02wHbF68SJj3SyHBz\n7yZUNZfHU0xdonRCuih46KG38vLVK/gy4+yJM5TzfZwpqE1NkiQgBGu9AVY0IJtPR0QakAYWu7z3\nbffx9v/zp/k/fuFX+cAv/CJh1KbwvvEwUkJtCFQD3Loq2Ao1W0rRCjUvpQU3tULpkFYlyKVswCIb\n0cA5RxAEWGs/iyiHocbSIxwA1Fp7uO/Pkrpvz0U8DVRuB/yrX/vq0Ovo41fnPQcEPzrGZ+dAjRhw\nlIzOvb4MDY3n9tbSL2vaYcSnRjsQRMvjl/sPBYeGXNLAa1dXbtkbRpa92ZwojJiOx8SdhEFvlXtb\nm+SLku3rl9nsdGhHEaZ2qEBS1xVVVTLY2CCbL7BUFGlGr93Fe0sSaK5f2ebl808x3t0HH4MMyOsa\na2ssFhVblJbkpub+Nz+EHl+kE3iGDz0A3iB9Qa0Nw+P3ce2ZJ+nHEShPWZYcP/N2rP0Mn/jUU2RF\nRpnuUrmKSMaoIOLYxibWWNZOniAOBkS6w3w0YnV9lcoa1kP41m/8K3zDV7yTH/rvfoIb+7sUOLTw\nSG9oKxh2Y2pZs2ocSdzi+TzjaiCJfYJAUkaNVxCHYf0yF7D2SGh1RIVahhzOe+Rt4c2tcOPo46NA\n/ayZ/8gxr5XAv1YOc/DfWnv4+M9S0Lz3jSys1G3X8WoJuvGUt++77T0IELYJN7fimFBL9oRDSHV4\n3KvJLYRoJP4/p/nkDSNLHLb52Cc+xWNf/E4WRUk9rZkvZtBWlF5ycTQm0QoQ1MbgvGOxP6ETWTqt\nFfame7RXBhALuhvHePnidcb7c4o6YDSvqCvDPJ0QRposX5C0FdIEjRvXmjiJKK7NcM5Qa8uAiiBp\nMc7nmMUu7XbEYj6GekI7FPhLU051BXkoCJJNkmCDzpm7+cD/9Tt88zd8FWG3gxEBUrV5j/F86hOf\n4K67TvPyC89zbbyD8oI4CGlFCe//nu/gyRev8Bu//muc2hyy1lYErmB7NGext6CfJLxSZlzx0FYa\nLxxeeaQXKOERCliCxh+orM5SyxYej3IGQQ0+WIZS4lCFFUIiaYDhljmBByTqSGgjkFJxEKc0AoK4\nJSELj7XLZF4KvDtaI3pVkv5qYWHpVZRqpGrkEflaiqWSdeu1B+acw1IhNVi3fE+eVxFLI52lCDXt\nvOR4P2EvLzFhSMsLnATjPXiHkLd7t1eP9Vr2hpHFLHY5e3wNny84trZF4BRsaSrvyTtztARlDU5Z\nnPdkeUFW5Jx+9EHarRVEHKIFbOUFL33mGZ55/DyLNCXLM5QX3Lh+hVA3M1GWpQRhC29zgl6HRAV4\nBXOvSXSEM4JxMacXdqjrGO/bVLOniPKMQTdkMd9nUWqCuEeCpp5OyCnZ3tun3ZeY2Qi7mFGHIbNx\nTj7Z4+H7zhJoyUq0wulHHuDC08+gpULiaceav/a17+Vv/vtfzY/8w/+WOBLMpxPe9ObT5Gfu5dMf\n+zTJoM/KvCBDN3Kpamo0zotDj+FFU7ST3uOEa+oXfkkCr1gW1m/VTpZgNd4hhWyeL4uL/ghAD3zJ\nZ+cRzTHO3q6wWefQS49w1HscEOToNuccWutXCQC326u9xVEpXEow9rOr/s0VWxAK7R19PJ24wyeu\nXMWsdPBHuwxe8zz+tuLra9kb6Fk8N65c4OyZ+1AGfCypBMQyQPoOoqoIVMBssouXgrWVAaNRD8Zj\nqkyQdHqUdY7WAa2gQztJSFoBZdVmvDtiMhsThAkqKPng736Qd3/pF7O6sorSHVCSxXzBysZxTF3y\n+7/9YU70Q3Yef4bU5Az7x9i5foEzp4+x4RKU3uD0sR6lqQm0Zm9nRru7QuQ9vh5jhaSYp2T5iNVT\nJ+nEpyjSBcoHlHnOwHsiFTLs9ZmPR2ysDkmrjP39a/yn3/6tXLh+gUe+6GGe+Mzj/PbvPI4f9Bi2\nV3n7McHFGyOs9IznM6y1mFqgtAYhqY0ljiNMXRIKiZA1ILG+UbDcUq5WSiEVWG8Qy3YWACnUrTAu\nCIBbAHfOHIZ2TQuMxGNvC7sOAH8QOr06JDua/AOHxx71PAd2tCZ0oKodJduBNa89qMEc5DFNXuNF\nk6wH3rApPaktGeuYUOplO8Vn152Mad6nsxb5haqGzdI5x06dpK7AmIyoNaSVxNjaIJUliCXT/V36\nrQ42kNzY3qOoaopiiko65DbDK8Hjf/IRpE+IgwinJLWpMbak1YrJK0eW5vwvP/k/87M/87+CFjgD\n4LBVzb/+qR9nZes4SQKnzxznLe96lCzPqDLP+NiA1X6XMIqY5xmTuM3LL73Emx99K8fObVLMx6xK\nxb22y/mnLvLilcu0B2062zfxKmQlSbB1yemTx6hnKXE7wVnH5vETGFeTLeYoqXH1jK3NTZ596Qof\n+eiLvOexdyO/qI3xAiFKMJJ2t0dWl+BgkVcEYYgSEh0qZos5w+EqdVlgJWzfHLE/mbM7mTGZ7rO3\nvUuazsnLkrIuCaRAeUcmFN43RNFBQJ0XeCnodDoEYYhH4h2YugIUSvuloCCXBPO3VC3X5EWNCnwL\n3M77pRDx6vaXppXHOocS8lCGbjyHvEXg5WOW+9wRlc57bvNa3jukE3gkyjk2+1126wLf7hAYgw9u\nkVmqW9K4lBLrHFKJZZvNn21vGFm6G3fTqj1JLyZL5yxG2xhrcCheuXSV++69myqUZFXFfDInEJLN\njS1qN2U6XVDmMy6/cpXACrzPsLbEWcFsnLIy3OSyvoGtCryvWBn2kEowz0p0oOl2Orzp/vvxleWV\nl1/gnrs2WekPKCZznFBgJwy6iiQWTGe7zCZTkmzI6Po+/p4Fu1cu0+51mY6n3NXp8MylHd7z2MN0\ne11msxlisEU1m7A2PMEzT36aRx57lN5ggC8tqXEYAU5KPvX4S+SLMUEgqFXAffe9DRm1EC5grT/A\n+ZL5eEaRZoRKU+clkYJAegaDLgpHOwnZ3hnT0YJeICHxdIKQk1vrdOVJfJXSjgWj8YR3fdm7WU88\n08Lx63/0Cb7p67+avZ0dbuyMePfXfwdht9NU2GvDT/7Yj2LmKfPZLpNsRppWzLKSeV6yyHMK4z
FV\n1XgfqZY9WgEoibEOXKPGGWvQStA0mkq8qBHCol2IsgInXCOfKwHWUwvZhIhSNdKxONLC4wTeLZN0\nKVDWI6zFC4HwAiGh0DWdrEQFG7yysyCPMuIkROPxuIa8y64Hrw8SfkALCv8G9IZ9LubSMYHSlJOS\nfD6n3elQpwuS/gbZbIbyll4rJugLVloerzXeVOzvOBb5lJfOXyDLDWlR0g5jbFngvMPUlss7O/R7\nfaR26DObeA9KK3QSY+qaaZqBLHnpxfOcfuBerly7wtrmcdprK7R0m3reQxZzdm/eRMqIjeOn2R1N\nEEIwmsyJQ0UYBawP+lRFxfrqAI1hsn8DoSUXX3iWB8/dxXjnGmdObJJN91AqIE1zwiChqCtM7Thz\ncpP73vw+KgdVJWlHLZ47/yKzWcnCWGpreeHaZXrtTlP1t02l3jnHdD6n22qxN53R6a8wnu5j2zEq\nbCG8p51ESKORicC4jK/96q+gXOxjbMCnnnyO08dP8+nf/xO0yqilJJAKYy1BrBAaelQsijHrWpDE\nEWG3TVFbKusw3jW5kpR0ex0qD7N0wXiWMptnTJ0hmxekaYozNZoAhEcRUCmBQ+KkxwiBcArkQU4F\n1tQI3QgTwnNY97FL4UEgD+Xeww64pccoFcSm5njUoS5LnDfEUdiEnixDQDzOWfRym1v2qR0Q6PXs\nDSNLEiQ4b8CWBIEGY1hfX6Nwkmw2xTtLlVXUymKKCm09CsFkd8HNmzvYsqAXJ+S1JQpCAiHJq4JA\nayK1grWWXhHjNhxlXVBVNXldEEaKv/U3vpluoohUgAgkq+fuRpiK6fY1bOWoZYuLzzyHxNHqd3Fi\nxsax49zcfYK704JFWrIzzTBFztrqOr3N49iqxNaa/dGEtZP3kFlHq9fD2RqDJIwSYqepipKVlXUQ\nlnyxz3R3h87KGqEQLBYzrK9J2hHeS9K0pNXtECctAqUIdYC1jZJknQWp2Ng8xs3RlPXhClmVURcF\nzgn2dsY4C3EArdDy5DPn6SSSvHLcde5+puOcvFJ0kh6TRYZUHlRTxNRKEGlHhqW2JUHgkKLGVBla\nJ8RBAzIwUM5wVU0HiQ8km1urYARqq5HpnPeIfoeyqrHGc+36daqq5ub+PsZ5SufQKkAg0DogiDR1\nXTW1I+PwDZMaMi27HBqpV+BNI1AI2fQzBB7a1Zy11Q2m8wIXBI2gZx0yDMA39HLC4ZAYu8zLDljy\nhVpn+cwLL7PS75MVOUJo+r0eNxa7rAzXaHXaqCBoJEknEdag0Fx+6SLTxRxTFggnKPISA5SmxNUG\nddBuEYZ4CWGYUJYl3aSDB77i67+GVr2guPYc9WiC7fRJOn2yzJBlFXkx59Rdd2EShZWah978IOUi\nJQg0PpT8tb/+jcxmEyLVI1zbQCvNtcvXWN88jlYO6wXHJOjOBtP9PbZvXEVjqbKM9Poe3d4qUjj2\nL7zSxNtKsbdznVYUs3riDONpShKGlJnBmJpeK0EjaCUJVVk2nkU1/WC1qVGhoKoqjHP4IARTE7dC\nQh0QxzGidpiioNcJ0brF6tYmm3efJc/nnHaeojxFXRfcO9wErRFYpNA4HLuLlMwY3EFuYgwiiSms\nIIk0ZWFwxmDzmior8QgMoLRGKIcVgvl8Tq/Xo1UvkMZRljX3rg9Iohhx7iylsahOm9oZ6rqmKAr2\nMstoNGI+n1NgMEuICiGbDm0aRfBgCYOXEAqH9I67Vvv0S0WsJNumhAgEAUIH+GWvjcOjvML7RsFj\nubbHefOFW5Rc2zzO2sZxro/G3P/gfSA0QnoWuyPe87V/FYeiLEukqwlbXX7jl/8trnZUHrBQVx6J\nZl6VzEyGshasIYkiglaA8h7rBfc9cDedVoyShnS8hwgcNojQskO728Faj9ZNGLK6dYKyKEiikjhW\nDDfWmRQFLzz9FA8//DCT+U36nTaj+ZTQeBYCdvZ2uef4EGMykBFpWqLTimESQScgDNuM1Zgb4yl3\nRS1sIkiOHUfIGFOmhG1PK+hQ7Ke8+PKLvHD+JbKFoShKkpbCWsF8MieKNIYSgaAw0E0ivLUkcYu+\nqakdxFGLqkqxwqC0QwhDrDu0W22itqQTacrxNZyQXLuyh8lmDAZtdm7sMjz7KEJqJB5rPd3eCtl0\nTtKKKMsC75qQKNKKPC/wxjMdzeglLSSa/nBAhaOqa0zZEFhrjRKi8RBeNB5EgRcO5yvwhmIyxzpP\nGEUksWIYhfjB8cazSKhrCIKY/dmCyWzCjZ1tstpTuoBQauJYcPfmkHPr6wTM2TmfUVeGldUe/Tjg\ncpqCbFqEvPANQZygqFK8dQROIJUkbmuiJOap18HsG0aWbP8a4V2nsPs5virxsqa0Na2VPsY4wjBE\nRwHOOS6cf5mNE6d45cJluko0xUvvwZeoRUa716KsKwgD0jInqi1eSf723/lustm4qZK3Qj74G79L\ny0NVTGh125g0xRrP3mROIDTxtX1WVgd02lvszlLy8ZT+sXXe3nkHs+19AhQCibcwGY3prW9w48YV\nFg+do9tbwwmFDku6BJRpSitMmM/G9OM2Z44pBnHEWFdo2UjnIkjY3b7Jotqh3+nx4NmzXHzhZY7f\nfYayrAkDh80tzhnybIFxNV6HJFFCURZEoWJnZ5uk02tUvHxBuliQFjmBUqyu9sjTGVtqwMrWCQZb\nxxGqwhpF0au4mU+wCKbzHKxvuhyUQiEItKSuDb6ucFogtaSsSoIAfGXxeNY21qnLiqgVk5uqUcKW\nUnGvs+xapvGCUgYEWpPmKQ5HWZZ0Oh2ElQQCQqGwlSVQiiAIqKuaRHmGsaKoRpzqSrbCmHvXzhG2\n2swWOVErIB+PCakh3+fadMrb3/koV6/fJLuxTUvVvO+RcwgZsn5si6qqqKuKXm/AbLoL1mCKCiUg\n7jaS9q++DmbfMLKcOnGaay+/gkKwf22bZNBDRYrxaJ9ea0BepKgwYPvSda5e2iYJWygvCKuUbuCQ\nQYD1llBIsAvaypGnM7pxwuqgRdyJ2H32IwRArRS+Mrz9zBDvHEUaYRAUVjA8dpyHV9YwQrG6MqAo\nUtqdDR547FG8KXHpHH36GFG3w2c++qf0RiEvnD9PS2laKztMp1N+9l/9NKiAtfXjxJ2EbqtDN4np\ndzvc3N3m3Ol7uHZ9h5MPrKOVQiK4ef0VpJNMRiUbGy0W5YTpdEa/38GUOaHSSOEJQ4XQIYgK5zR5\nWjHOxgxW+zhXMxz2CaII4yV6rUtdGqazDGcFvY6mkxiKfMzpc6fx7RbZPKWTtNk4e4a1U6sooekd\nd3jB4RoT6y1VVRFohSkNXglKa9BKESCJkzZFXZLmOU4KQqnx1mPLmkgHCB1iyuogO8fiqeuSQDX1\nEAdIrSnrGlfVBEozn84YDofUGMIwoChyjAyZZhXeVPTaCXE7oaosxWJM4DyLRY5ylihKUEHI1vEV\nLu3sIsKAM2dOUVQZayogaMcUo5t4Z9FAUaZoXxNoS
eVKVgcDVBSQlsXrYvYNI8vNosKmls88/Rxf\n8u5HMXlB+/hxstQQ+AXaOV46v8P23h6ddo95uqCzska207SNJ0ELh0AFFdJDOd3j5IkVtHaE7YB4\n2CIJQkya0UoCZnVBO1BUpiLqRXSSDrP5BJgx2RkzWFlntjdGSst0cpMoiCjKkm4SMt1LMSQ89czT\nfNWbH+NLHn0b1jd1gq2NFXrdNvPpmEFvjaoTUxFz7OQxZKRYH80JooitKGJHwu72hGef/SSj7Sv0\nwhZIRdBSaAm93hrloqDbTyCIkVqjowrtPd61sVqS1nu0wpDaFPSSDt5YWkGIEILcBwjm/O4ffYa0\ncHz7X383baUxok0+yymKiqg1YHdvgisW7Fy5TF0XdDeOMTz3dggaRUrKBtQ4jxOCuqoJvUVpTW0K\njGjkV5wlDkOKtKCuKrz3RHGMtwavFGVZo4KgWdwmIa0KwijELolXO4sUAmMtKtBMZtNmf20b6RmJ\nECFeaaoSSlFTVhWtSFOXBZv9AVoGOOOWKx3t8l4DtgklWz1KZ1mMx8ShItByuQTZkVtDJ+6BkoyK\nFDczqPD16fDGVfCDFoMzHU6cPs6izijHC8z2NsnmMUyRcenCK1zbbmoqk3BEmqZY56h1gHUeVdfk\naYk3lsrOgdv7AAAgAElEQVTNOTZoo4erZHnGhRt7nG6vc2U/JdYBKlqBziZBWxOpkNHeHlWS4OuY\nbqtLPt6jHFVsrA/J8jkiiPFeEoUhpjaEUpOOxtx/zxnWWhHhsE2FIxtPaWuN8hWDfps6TwkCx6ef\ne571YZf57pxItahNxVqvS6AiemfuYjjsc+6+b+P8409w5uxZKt/E9z6veObpZ7m5P6YqFuRVRZrO\n8MbhnKMUjo4McNailGY830MFGl2UVFUBImaoauZFQWUj9tOM1SjCAbOsYhAP+fvf98P82I/9Y155\n6iKrK0N2d7cPF48dVMEFYOqaKi+aRXKhwDmLQhNGMUJKRqMxxlhEaWm32qhuj6IoMNYivcM7gVS6\nWedvPWmWEUURvKofSwgOi49JkjQysbVopanKEhk0EK2tQ2pFq5UgXNOZ4ExNVpdoHVLXFVGikAKq\nqkTroOnolo0iZ6ylNgVKNsl9GESk8wVSN10AnWGP+gs1wXdFReH2kVFAL2qh1lcYZTmDVpef+4Vf\n4fT6KtrAmc0t0jRlqzekLCvi7gBjLBbHfJFzZbrDd/6Nb8dkc6wBayxOwu7umNXVCOEE00qSdNfZ\nswWRCPEDzaX9XdqdFfbSlKeffZ7+6jrPX3qBeZbjowGhsChr6LZiBr0uvqpYzKbUmyeZTqdEcUQy\n6BM4wXgyotUfEgZQWIdLC+ajCSpoeplcVROFMQJDnqYMEs1kb5vVYcJ4vEtvMETiKLOUUHn6nYi6\n9nSSkMn1m9x98nRTyU5CqrqizitcbcgWE1yeU4ymSCXQbUU6H2FdySRdkE8W3Kj2Ga622ZuMeOXK\nFd79nvfwod/5Pc4e67I/H+OFYHd/yjm5bJQEnHc471Bao0QTNlmawp81liLLCMMWSXyrEp7n+bLv\nSyGcoygrpAqojUXjUcI3i9+ERgcBpjYYY4h1cNgJkOf5YUW+rmtarVZzAwqtSdMUHehlM6ggCAKU\nUgQ6AiRKglSeKIooyxJnLSLgcBykR8sWsZbUxYKyqukOBqRljpSCRZFi+AItSsZRQrWYs9ZbxVRz\n4qgDVcFv/spv4KzA156IZb+sElhXkyQRxmZEUUhaVHzl+97N+VcuoBc3EbWh1VlhMZ+SLQruPrVO\nXaUE3sFsxMWPP47qb3Dq2CaimDHIUtR0h/H2Lu84cYZRmdPtH8N1FIN77ueTn/4o73r3l2JcSTaZ\n0Vtb5+Rb3kpSWbQpQQWMr1zj+Wef5uTZM3zkk59hOBzQj7rMqHnl5jXarRYXLz7FY29/AKSmzmvC\nKILa4rKapL+OSFMwBmRzhxVlamReEEuJsVAsJlBvoLUky3NCFdCJNLUxtOKAwpQUeyO2TpxoCoam\nRJQLyvmMerJLEGi0lZTpjLVhn4/8yUf5rv/4byL8jOsXR/QGKwgb4K3HO4dXEkFAOl8c6cJt7k9g\naXIZLRWmNoggoK7rpjIuRKOE1TXCG+raE7UCgiAg8I30L6VESYkzFls24yi5XFdjGylXKoX3UNc5\nxlqc9RhjiKII6xxVXXJya4soVHjrKIqSPKtwTlBXOXEc0+l0kFJjKsfq6jrgqB0Ib6mrBa4uUCJi\nNp02LS5aopTFlOXrYvYNI8sLr1xl9/plqk8WRIEECw7F1VFOp9untDURhtneBF8L9Nom+/mMJJTc\n/+DdqDii9ikXLzzL2WPvYJrOWO10iVVEstZl59o1kJ7NzTWqKueBu0/z+598nkHsSIRhrZVgnGcx\nh+4gJPIx2oFFEWDYvnoNU+QUsykrnSG+mFOVKfMipy4rqsIStRKshl435kvffA4vJVjPg/d9KdPJ\nDp1Ys3X3Js6XPPP8czywtYojIQw6JC2F0g4dKbLpHnvjPT715Hmy0R6lV0RBm0gCYcjU1nRabZIw\nIaLGlSVVvWAxn4EWtNcS2qcvcmZrj1ZygQvWc33vvcRxTJbNePKPnmBjY40H3/IIJ+55iNliRqve\nod0Z8tKzz3D6gbfgjSNIFDh/uAamkp7KWJxxBMuGRa01SjfVfi8s1hscoJVu7iugFEVWY7yjTBe0\n220CJE5AhUc6T5nmRHGAxUPYqF95nlPXNdoYlJJ4SvK8RqukKcrqAGErBIJ0NmdhDKUrCXRIFLUx\ntaPX7qA8qCUpZyZjvtjFuaap1DtLHAZAiHGO9fV1jDEYY5sCuapfF7NvGFne/r4vJZts0233CKXk\nIx/+GM++uM1gpUvkK/Zu3oC6IIlCnJWMFxnd48d47C2PsD8eoyWcPPcAX/bOLyZAoFXExcuXWVza\n5k+fe4pyntILYnqrA65cv8yZM2dQwQpXtkfEwmI3W7SCNsnqXaSVpXYeUVtWBn0uX7+BCENKX9DZ\najEb7RK4GGdBhQkYS1nntGmxtz/GO0G/v8o8zwlbAWk2RwURWV6CqgmyOdPLVxhnY2oZcNfZu5hO\nDEVREEcJcStgoxXzdV/3VTz+8T9lvrAYJyknY471YmJfogtPPp8xVYaVjV1OnL7A2rEb9NcvMWjv\nIJ+k+b2C8/BDfxv+dJrz3B//e1Qm5r4H3sRDjzzMv/7lX+b67py/8r4fZf+lq6Sm4tL1m6zf3bSn\nHy4rBsosJy8K4igC75C+WWGptW6SeWOaTmWplsuULUnSFIEtniAIGk+iFLUxQNOi4qynKEo8lnY7\nAecospwsTZFSUhQlg0GvaXC0jtoWCOHBhQjh8cYwzsdoqQjjACmbm0846yiKggBJXRSEYUAnilE6\nYDHPqMsCoQRG3lr4NZ/Pqaqq6UKWniD+Au06lkrT763gipKXXrxAGEWsrK9TlSkyq2jFLUQYkEQB\ni9mCe+4/
h0lidJXTDyVxHDO9coEkjJhc3aHdbpG0OiT3neORR9/Gcy++Ql96Ot02IvoKrJJkC0Os\nPe1E022tMJ1sMzGa2XiXOpsym07ZH+0xPHEX7WjIL/7cbxG1FJEKgQy8YdjrE0YBxzePEwpBLDWd\nVoe41yVYGeLzglcuX6elNMPBkMl0B1/VtNpt9sYTtk7ew2xaMFztI0VIEEp8VQKecm+fYrrAC0U2\nyWkFcPJUSn/zRTqrlxhu3aDbv4F6wcLHgf+HhiDPAw/Q/H7BJvDl8M7vvcJbvvun+NgffiPXL72Z\nj376k/zgD/wD/vmP/kum0wUbJ89w8p57Obm2wqkH34IOg1tt9lLgqgrpPaYoUUJSugrvwC8XbjXL\nBWrq2uBUk0Mc3N2llbTI8qxZcg0Ya5u6mfW0um3iKDq8M2aoAtJsgbQerCWKIrz3hFHYFA1VsxwB\nb0kXGVEcLeVnT55ZimIBQBAotAqbNTphSKuVIKVcXpMgiiOEFERR1ISGQUAYhmRZ1oR5cYjQ5nUx\n+4aRBdPo8k8//hQdQvpao3yBtiWuKsjrGik03hQ88uBDTEqLKQqsaxogTJHRjRNsVdEJImxlUIEk\ny3KiKGE0GnHfQ2epswWJjphOJqy3e8wWI+L2KpP9GwS9mDPn7uPiM59mo7XJyolNrrxynl5vjfHO\ndfRbh7Q6Q/AJ4/GYyXyfh+67F49EaEXQ7vEVUY/aVkxnC2pTN6v1kj5Xx3tcHk/QSqADweo9DxKb\nOVlhuXFjh8XlC3RaLaQXrLQ7BIEkaCVMpzN0S/HwF7/I6Qf+gGG+Bx8Ffp2GIE8AJ4F30JDjO8A9\nAvN8ndH14xhXcu93PA/fBckvFbz3Z/5vLj/wNKMPPEo/FvzoP/p+Ll+4RJFPmV0NWOxe4/knHQ9/\n+T3AckGk8yRJo3oZU+Nd0yIShJqiKLC+kYmDMCQ3Od7eSsiVUljcIVEmkwlRElEvFgRSEbQltfBN\n645zpOUcgNby+NpZyrKEyhKFEc5YhGja9ZMowBhLt9fFSAhMgJcWpQTOW+qyxlQ1WkgWQFrnDPoD\nwm4HrEEuhQG4tR5HKkk2y7B1Qenz14XsG0aW55+9wc1LlzFCcd3XlGXJooYo7NFe6xKJkkff9hhx\n6EEr7M6I5194mSvdAFFZWq2Q3dF+M/OYgH63ha88stUnbLfpdRNQIUYG7I5GhFFEZeZYU5NlKbVz\ntGWMMBW2Njifke5c5dhgSF2VxKGk3xtQZAWtREInoswDqrIkShJ8lVFmU3qRAa/J0pLBsEfqJLOn\nP8a9p0/ixbJY5wxVuk3uDCdXV1mJVtl68CtJjUUJSzpZ4OuC69c+zpd87afYPPsRot8r4NuAJ4Ev\noyHHDwOPwtR02b405OKFhO2Pr7P4zTOYPCRImr6rn5p8iO//+WcZ/EIKXw6nv/dZ/vP/6hK7l9Zp\nr30Pw7JicX2MXKTcc89ZXh4ZjHeEUmKXzYUqDFnMJiilm4VgCLKiREqFMDlKAEJjlEBLTSgEZVHj\nfEGgA5RuKuLONe36WisEnsV8ivCglcJY3yxkowGvMQZ0Q7S69pS1xQhPDwneIKIAu7wngtCS2tbN\nTZKsxriajvGEcYRUEaU1BE6QyABbVzhgcURtM8bS6w2JIs3GxibT+U0Uweti9s8lixDip4GvA3a8\n948st60APw+cAS4C3+y9nyz3/dfAfwJY4O9673/ntcZ1piDUimyyh8lynIBhGKE7bUyZ8t53vxMl\nDFaFOA933X2SU2eON+16xuBxIBytrZPsXN1jPBszHu1QFCVbwrB2fIuJdWR5SZ1X3Lx0ldGiINYa\ngafX7cBlSf+U5fq1Paq5pBVpOv0OcZQgdI8qM7RkwGy0YGWjzxOffplzp7aoFhPCVkzpLF6DLGu0\nr9m9dolwuEmaL2h1OxhfYYuSoeqzN9pHoZhe2WbQ6yNMRj0bkxcL2r0XYOUDPHbmCcTPAP8SWAf+\nLtTfJLl8+TivvLzCxadXuflvu4z3LUooijpFKYFwO7SjmLADOgh4/Mkt/ocffyff+h9+lPs+8Un4\nLgh+KeX4z/w4C/lJ5vPvxoYhE1NTFTNq1V82FHr8slPYOku33WW+mBMEAaZq2pGEFWghCJUiL0pC\nFTVRkvcgHGEQHt43zB6EX1oTa01d5dRVTRAEt1ZZ6malibWWdqdDaTImkwlSapyQTVgWxwhXIZUk\n0QFIwWgypj3sN9esBNYI7PIWtJgCZ5v7jM3TlKqq6Pf7dLtdtNYYW1NXDtUwHmccpQxw4v97gv+/\nAT8B/O9Htr0f+KD3/p+J5jcj3w+8XwjxIM1d8x+k+QGjDwkh7vPef9Y60jCQSO/odTpY77GiKYit\n9BMevP9BVFQi8OgwxhYlQUtisxRbQ5JETEe7qFDgQ0G/FxGIhMHqCaTUSAS6spRFzsr6gKKquOfu\nk1yxHc4eWyegJp/NcUoTtvqsrg7odvo8/9wz7OxXBKpABAl9WUI5Z2+cIXe2sbrFr/zeH9NvJdgy\nozaesrLMphOOrw3QYYDuTRFhjxuTlLCTEMcBk9xQ6wif1wRRh6s7u7zp/oxW+1eI7vpVwkuX4R/T\n/Eba1wEfgPm9Xc5/7E2c/+/PUpQdxoVhtkiJAkE7NlTWsdI7RlGmzCdTnEsRRYESAfeePU06Lvi9\nf/PV/Hye8fd+8Qqdn5vDl0Pnx/+Q9S+LeOoP3s1iP2P9WJdga526rklo4nwpoK7qwyW3ZVmig4BQ\nxFRVhUWSGUsYJ7j/l7k3D9LsOss8f+ecu99v/3LPyqpSLS7tsmRZ1uJFljHGxvLaZm+zDTQ2M21o\nNx1shjbDGHowMDN0QzQNTZg2GJsGjA0YY2NZlmVLtnZLJZWqpNpyX779u/s9Z/64qWJrNBNMMPL9\nJzO+jMjIijrnnvO+7/P8nkIgVPXf+1yxDNDv9/9Okf+c9VfISimNAMeuagYv8NEC4iJDGEOr1UIp\nG40kx5A6CjNJqTs2msoW7HseotBIIQg8F61slKlOoCyKq2uksi7NZJ4bhCZJQpLGSGEzmUz2ayPo\n64J2y/3/tlmMMfcIIQ7/vY/fBLxq//sPAZ/f3zBvBj5iqni8c0KIM1Q36/v+/u9td5vsrq1hypxG\nK6TRaTG7MAdSM9g+h70yixd2ENLCiJIij/BsmzjPSOIJruth6xKRDjFpjCc12BVLV2YlZZZQU5I4\nnxJISTHdQTVbOKFFf3MTzxGgNdFgk8X5FkVecv3lh9jY3eP0qae46oYbsXsDAq9kNgjpbW9y4sgM\ncVRHKpeNnRTLslg+djmt2RZ5OsTyQ0ph73ePNNK1KKcxsdZ4UUY6HuEFexys3YWaex/1z0bwfwFP\nAP8KOAk9c4ST917B2l8sMJmOieIx61vPMp6kBLU6ds2n0ODXm6yurpOXhlroYNnQDP3KV6OHONJi\nfWuThcW38Qe/OebOtz/I/E1fhNfDzNkvQHElswuHMXpI03eROPuwv
xJNNccpyoIsyxECsjy/5Dkp\nigzH80jijCxO8WveJQ99luc4tk2z2axsw/u24jTLyNMU3/Ww7ArCoXWJ8hxKKTBCVizkNEPaVQOh\nAk2KSoGtLIwxZEWONgbLql6KeZZj1xRFCcr1SeIxJYZpHCH3kwOkkEynEVqXJEmC67mUeY5l20ip\nSLOEuU6XwHn+vfBPrVnmjTFb+99vUfVgAJb+3sb4RyPyap06h44ewlOS+fk2tusyHI8ZTSOczgq7\nw4Th2iqFFviiZOPCOus7e8yHit1BH7/mY6KMa258MVmeUGpNYLv4zZB6q0mSarSQeIEPqkQVLjXX\nx2iJF9bo7+7Q7S5iWSVZUpBmKUIpVo6doDm7yJlTT3Gg7jONMpr1kAu7Q+YWFsjSMXVHMuMNyI1B\nizmMsfFqIeNpBJMxpYwZrO2w1F0k0wZqLuPhp1k8ejeh8yDid6iuWjPAe0C/xWZ340ZY/2Ye+cqE\n0daAzdULTM0Yzwow0sJr1DhzbpVGI2D5wDyD3h7CCFaWlyiyiOlwQG1hgZ2NDbr1HBXWcYzLxoU1\nmkdm+fSHXsa3/eQOzg2nEL+X0j5xP1/65IRGu8RuHWFF2BigFJVjMCsKjBHYjlsZpIqcNC9Qjo00\nimgyxXEc3KDKOcnS9JKHXu4DLMq8Op2ElFiui205QEngOZUCOM1BSISmkvFLsJVLUez7702JKEpC\np7raFfv6GCUltmWT7evRxuMxSimiKMVxHLACBoM9lmc7WMpGG41UFpNpjhCKvND4tkOuS7TRlAKS\nqA/a+2fZLJceY4wRz5/k9T/8mW85dBsBZRoTTwZMRjDOck4++SQ3XXscvfssxxoBcVpikoz2Qp1X\nvuF1bD79DPNHDxInCVZREgYBu4M+ruuRJzFtL2AqNa3DS6Rb25zd2uDi2hpRknHzqw9wz+fvQScR\np06dIpnG9EYjSm2YbbWQClZWDuP5NXa2VxnVBEutJgab4y++ls/edS8vffGLcX1Fo9VkFKXUOwex\nggCdFVi2xdQuqfk+C8sh2XRCbqe0Vj5Al/vg31Ndtd4AfATSqzsMzr2Kvc/fRN3tcu8Xv8BMs8ve\nzg5JWeAHNT77pQe47vqXce70WRItMLEhnCY0gpCnnzlDu9shmU5RSmJJmyyK8JSkFtZZG/cJZYi1\nO6UTQ779XTjvfR/8L3DNw08xOvWNvOiW2+j1BUrGSOpII/c9LQVpmu/XHTayLCreWJqAFDiOg9aV\nlaIo9aUaxLbtSyCIv03AFFSyH8uyGMdZpW4WFmR5lQig1D4I8G+gFs9d3fK8qiVsqyLel2WJbVcG\nt0taMsuqsK37MxRLOaAlJVXnLUljlFL7BNFKr2Z7Lsq28IIATXmpU/aPPf/UzbIlhFgwxmwKIRaB\n7f3P/35E3oH9z/7B84u/8sskkzGurbjp2st5xctvYzaYwUwnZLtbdBybbDhGktPSgrW9XYZPO3iN\n+SpUKM1whUUy6dNQkulkgK8syiJh2N9jaeYExi5ZWZ7j8LEjFMrC9Sxuuu0GiHNueeXL0VlKmWR8\n9KMf4dbrrqIZCLJpSVkIbjl+PZP+JjJLsQRYrofrddkbZ6RIRmtDsnxCp7RIC43tuGzFGdHmGru9\nbdLRmMNHQl76xo9hf2wNfgJ4N3ASJvI4/WdfTnzXNTh2iC9KRr09rr3iSu770iM8fXEdu95Cbycs\nL13OIw89Sac7Q6dVI40mTEYp2oQMpyVrF7bpdhugLbY2t/EshckS+hurzB1eocxdkiyns3iE+Hfr\neD/SQDkjrL8e4C7cRWfxrXSPeUwHO9R875KDUwiJ7/nV/X4/HCrOU9J9GftzCytNU+KkupJeGvD9\nrRZtURQVJ0wIpKhIKrmufDNBWKeIJpiyRBsQ0pBkKYUuCcMQozWW66KMJCuy/Y1gcBwL0DiOx3Q6\nJU1TkiQh8CsPTVmWNFotUlHVR0qp6vqn/lb9pHMcy+LUxU1Or24BBv3PJKT8BPDdwH/Y//rxv/X5\n7wshfoXq+nWcajrwD573/ezPsLN6HpNOEGnC2vmL1LtzzHYXCP15TJnjOJoyH5IXmgPHj7C2t0t3\npUOW5gipiLOMLJ1SRim+E6ApSFVJtzmLTjSW4+KUBUWRYJeKUgskGmXZ5CKByZDCDnGVYanu4LoW\nZWDIoxIz7dHwPHINTpKwlU3wZ33mOgHTYkRroYaFQ+BkZHk1eLu87hLLLlvdJk13m+Ov/E9Yv7YF\nvwl8Eaa125g8ezvxuEU8SdF5hPQLmo0a6aTkwYce4dlnzlGoEFPYXDh3Fs/3OXrgIFIbdJ5jOS6J\n1mxubNIMmuxuD1jdqCLtzPaEm68+QTHqYQcOXRXwxLl17FrAV6OLTM6lnPjjyzj03kfhl+Hof3uQ\n0dZ9pIMhsnEL4ewhcjPFLiVZmmNZbvV2FjnTOGcwGSEshWVbxPuJalJUC/BvM8aeawo8dzrkeY7Z\n54FRVsldjlJVk6Q0GA2uLYiziCyrrmBaiGriH08oHUmuBTqrrMXPXaXiZIKUEsetVdN9BFmakufV\nKMJxHCzHJt/XlqWFpkwTijJHYTOY9lmeb7E0VyMI6ziOy6fvffifvlmEEB+hKuZnhBAXgZ8BfhH4\nmBDi+9lvHQMYY04KIT4GnAQK4N3mH2FiKiFo1GqYwEMbQXvlCGlmCDozjJKYr33tMVxjCB3J0pGj\nGAP20Q7Prq2hjaKYjkniiKxIiHsjLK0ZTAbk0pDkOQePHGY8GtGp1cC2mJ+fw7U94jSlPTeLVApP\n2hRSMXPoIMYLkZ6PHTax45w0HeJKAXaEZRS1JOXY0TnUxi41z8LRDlorZKlpuDaZKUjTBF9JWnMD\njlz3QayfGsFdYL4I0/RfMj11G3k2JRv2SbOS2cUVppMxvV6PNCuQ0mJ3PMWfm2c8TVhotzjUbmAh\nsZTCkpKNyYSkVNStBlkBpZEMplMmuwOc0GJnEJH2hyz7s0RxzNz8HOvbG9AOGN1ykFHtevQ3vRf5\nEwmz2xE7jQdwWt9E4/AiAgtHN9jbexLLqlBHVXerBCNwPQ+kRGuDZYlLgz1rv9NkWRZlWf4diJ5t\n21WEnWWBEEhTpbHpsiTPC5SWaF2i04IiL3GVoNAlaRzhWi4IG0UOto0xNrosKpyv1riOV51kCoq8\npCyqE+6508OyLIosx3VdommlKBiOxtRqNdKsrJLXlERKh6IoCIPg+ffC/xPf9Z/jEUIYXcYMV89j\nZSmMEvrThNm5BYbRlMxolq+8nI999A+4/eaX0Wp3cFpNSm0QxYTdzS3aS8skuzvI0OOJxx/l2oMH\n2Tm/Rnd+jixOSOIUTWVGsj2PAkMwu8JgMqAzP08+HOBZFqMkQQXg5ilJVrC+tcVWb8iffOSP+KHv\nejuD/iabmz3ml5Y5/uIbiNcvEnYDkmlEnpWcO3eB+bkugbBxHZuMBzl4068jv38Kq2A+rijGP0qy\nfR3RpIfQGckwwarViErB
eDjGQTIcRvR7PU6e3eLMxSE6zbhuaYZZYVBKkJfV3KJXVLSYcZKSFJJM\nS3I0UZ6yO5kgZMmVJw5h8oQjx48ziEvWtjYJLMHc0UPIQvNt3/EIs390D3wNdj94gKDxU3ju1aT5\nE1j2zVhcxXu+63XoMqyEkiaGDCp9cY7WEmMqRbgx5lKd8ly98FxNoZQizyuKp1LV9N93XISprmel\nMTiWTTydkueaoijptHwyUyJKUK6DMJKGo0m1RjgBQpeXJPyWqmqnNE3xPI/ppJrAh2EIcEnS8tzp\n5thVu7j6uyqJURD41b+qyHGl4P0f+hTm/8e04v93j84ok4TQc5nqAb6rGeyt02i1kE7Ao/fdy0tv\nvYX2jIdROSURBWAJzezSHMmkh7AraEGrVkNmCYutGr3+Dk3Px7EM0lKkOmK8u4vje/RHE4pGnd5a\nQle47Ay3qQUu480BZWHYm044sLiMU2re9Na3cvjgMvJgh+2jGTXXplQ5K/PzDIsxxDmzMwv4s4sk\nkxGz7Q65OcnizP+BeHMOAZi/cMl230e5e5Q02sMiZ5pOyF0XaQxxf8yoP8APWxU9MzGsnT/Hq152\nB89+7XHmlMaxAqRy0HmK8iXBNCUuNZ5SFdKnKMlNxfZqaMlUJzx59jxvvOOVbO9sULgBM50OcTKm\n2Iu45YpjpGoO8wP3II7CzC+s0l/7GCb7YeTxl2B4LYiYO77xdfzln92FVFWB7DoKU5TkxkLKKoPS\ntu1K/rJPyTd6n2rB3yBSn9tEw9FwXzE8JfQDhJKUeY6xBBNTEoQ1PNvDrYfUHKeq86VkvLtNw3EY\n5THCU1CoS9P+PE1AV63tIsv2vTiGLMtwHIe8KCjLstq8loXjOJdkOFKC4yoEFXfZSPCt5xdSPv9P\n/xkfI10ef/Ik58+dZ3Vnj+3+lAsXLvDAQw+wtrnBuTOnmZ9pkE4MOndJI115H4SiSKa4noMlBMIT\nPPLlr6CNYiJtWgsr5CjcWkgmBV6nje36BLZHEIacP/sUykzQVkyNHLucslBvMuxtI3REOt5gZVai\n8hHGWMRRRtcCS08xo13Wp5sUwwHdwCEe7RD3Npn21xjvnMep/R7i3+cwA/ojTYZrH6CIr6WQGcay\niDOBwcd2HMajKXs7fZqteYosJ5sM+fL9X2Y0mfLs1x5kIbCo+3XqnoNnW7hOiMlcPLeGbfnY0gVj\ncK9jDJAAACAASURBVKTAERILm9Cx6XgOs7U6m9tbeH4NM53ikRNQgTrEYo2vfXmPZ9YX4J3Ar8GF\n7a+yvvkyPPs3yJNlzjzwAOfOf41vfv3rGPa2mMSaX/zPv8T1Ny/zgz/8vXhuFQSUZdULyfU9vMBH\nOjaWbe2zjwV5XiKEosgMwmpihbO0Fy/DbS+iarN4nWXcxhIHj1zLkeNXc/T4lRw6+iKWVo5wzY0v\n4xWveB2+71J6CmEFZHFEEqXkaYEwEj9oIVVIrTZLszVDp9vF891q6i8FxrLItaHV7iCVTRSnCGmh\njUBZHkK4GAFJFpNkMeP4+T34L9g1rCgiTn7xbjrNJtrSLC4ssDfos73d4/LjV5AXKVYQYJQk2dsF\nFOc2VtFxyZXXXkGeF2SpxvFt1lcvoDAMRxP8epOH7v0yaTIlyguMnjIexVBoGq0Zzl94hrrv0W62\n6LRqDPoDZrozXH/l1UgFSZ5gezaPnXmWm295GXPtEKEzKEqU8ZBKM9zcRLk+VhiQuAHba89SQ7Nw\n47tRVwzh4zD2P0AZX4mSJTYp0XhKHieURcFkPEGXklhrTA6j3i7bqxt0Fw7x7KmnGO/ucmRmloa0\ncd0mqTZo6RJnmjjPiYqCSZIxTiIyNIWwSIqCrMjIdEat6SEdw8rKCjs7OziuzdzMLMPREMe3eeXr\nb2Gw8yVuv/6/wY1gzkti6xHiZAY/qGPFG/zRr/8kz5ySrO6sklk2B1+k+KEffTu/8NP/BZl36PUT\nHLdBXiZ4XkBZ5sRxBWpH/w2hXghB4PqsvOg6JtMI3/MIgoC5uTnCMCTXhu3tbUb9AXvbO4yTceUv\n0XDZylFGu6ep25pSu2T5CGGqrptlWVhWpT7PsuySPSDLMlzXpTCacZJQZjlKc0mC47rVlN52HWwl\n0ZQoS5BRYNKcD/zOn3/9XcOkVHS7MzRsh3S0C5sXmW732dvaYcOGmZl5onGffNIjnWpmlpa4+sgJ\n/vpzd6H1cfx2AzeH4eYGBxeXSZIJM50W0nNZeMedeI1FPv3hD3Hnv/x2tFKgS0yRk0UxKolJTEa9\nXieJcx594DGWrr4Opx1gpCEpE5auuBpdpJSWwHFCEgHSqdO/cAbHs1ACcmXRqNdxlw+heQh1ZliZ\n2C5vYJ++DqMjPM+lTDOEspiMI0K/hi6qqLZaGPDUY4+jdAZ5zPraWcYbGzR9BxdD4NpYlkagmGYJ\naKoiVlgoCb5tI7Um1uAoh7LQFUlyMqW70KIeevjOApPJmN7uFvPzi0jl8MgXHyIrU2684wC1a1cR\nX9SUr/4s7cY7ePJzdzF+4mluvuEmFg85aGvKH/7xF3jTnd/C//a+X2W0G/C6b3gJn7v7XsoyYzqN\nSeIq9NTz/Ap5LPeL6/22caYzglqTzswyn/vrzyKVQUoL3/ept9p0uh267VkOrlyG4zso6WKpSkb/\nl594klQWpFmJkAazP4jMsowoqhoQz7WO0zQlDMNKkmP2WWS1OjayktjsP0IIpmlM6doouX/jUy7S\nff4R/gtXs2DY29vD7nYJO3WGm+epd9pYvT4hFmlvhyIrECZjPOixdHCZ4eY6ridJsjEOLgUGzxEk\nUR9fgLI08aSH64foUcqxq45WIAYypBIUlsbKC4o0xq+5FPkUR7pITeUjKRLCWoApMoQtiVKN8D0m\ncYo10XidgPP3PUK3ZqM8RaPZZev0UwhLMXP8s/BJ4E7Qk5soihLbrvLmjZYkUYrvh5QG3KCJpQSD\n3i6BUliWTWwKSA2WUNi2j0ZilEBIg9IGS0mkyPbjvEscS1CWFmmZIXQJWiBlRW5ECOp+yGBvl1qt\nVnUdy4I0mQI5tXaTNFGM1y+ndvsq3AXBNzxMLN7F0dtexU60zd5uQnd+yGOrJ7nxlps4v36aN77x\n+/jkH36c615ygi/ddz9RBEoJLGUBijzfn7xLfSmd2XXtSgLjugRhg2uuvZ6DRw5UamNjkMrGUjYW\nVfxflKX0BwMs5bAwM8skisDWxKnBlDFKW/uninVpQz4XHRGG4aU6qSgLLLfSngW2u++IrAx3Wmuc\n0GcynRIELkIbBIZOWH/eFfvChRkh2O6PWDt/FkPKfLdJtBPzxOo2o8GUcbLHXGsOTxuQFqfPXkDU\napx55hm6y4tsbfaxbYdmrYFQqlLERgXKb6MzQdBt00vOkWNQykGXOVaak+c5rudCqTFKkZRjnE5A\n+8hB8uEumSkp8hSZCaLdHl7ZIVSSrO6jA0F3tksoNEZpPNvCOC55VuB3Hqw2y89AP
nk5pRb4tosp\nM5J0ShB4aLtgMhwiRQlGMR4PyIoU43qYZoeNJ07h7tNbMNUU3QiX0pQUGhBVJHiZSxQ50s5xUIjC\nIVcpRudoNIXWjMcT/LpPuK/cne12SJIUpEGnGZ7ncOFCm8XbgX8HZfmXEK0jnRDvpR6zcUBgH2dA\nxnnj8arXfBNfveczvOL229nY65OXOUoF+F6JEA5aVAu20EUF+M6rYeB4PEZIiRIOllI0Wz6j/oCH\nH36Y0WhMvz+oAOK6cjpKKcEWfOd3fA8zCwvY0kOZFJMOUI6DsQxpFmHbNQpdYEp9afCYF+JS29qy\nLLJx9Xv7UbwvkbEIAh/XdSAvMI6D2V+LYc0nV1+n3DCNRRk0uPnmW7FIMUVBWmRcc/NtmFgTy5TQ\nq6HjEbUg5MyZsxw9djkH5pcrtI3WuJZDb5ry1FcfZGGmgbYV42hKbgR5CYOddT71zGlSk2P7Hmla\n4rkB9WYdipJElzRaNbY2+jz4yO8wUwuI8xTbdasA1WxKkeXYYQhCcfkVx7kYj4n6u9UGunCesBbi\n+0MOpBfhcTCvUKSnr8ZzbfIiJopGOFJiihLQhIHHxtpFilwTeg67usAG4t6Quu0RRTHSC8m1ItE2\neZFWs4oiIlCSQpYYYVMYiWeg0AWZr5hEE4YxaJ3R8C3yaEqjWSOaTKm7LpbjEgQB69s71BoNRmnC\n00/VeOkPCuQTBifdYW/4CR6/7yR3vPHHmMpTjLbWmGsvE2WbPHTq83RWcu648w4GW1OeevwpHnz4\nLEJXchMjn6tTKgm8slSVVY+hLEC51U3AGEWW5Tz99NNIaUiS4lLg1nPSfYTAdR10UTkhpSqxbA+h\nbNJ4ikQyHE9xQ4+i0BQajBTVcNrzMQKywmBbVavYsixMmWFEgdYCbMMkqSCAUlaQ9SyNcd2v0+Qv\nYTQd3yXb2sBxFDKoY2uDSXbQSck4KfAbGjNaI9otOFQPEaM1pPTIikpbhBDMtEJuec3LKXc3cS0X\n3a6jPQ+RFKTjHq5fww08SiEoioJHH32Sm1/+SrI0QcsSdIooJWVuKLIYv91BeAHJeMh9n/0zbn/V\n7QwTqLsOcb+He+AItetvQNsetrRozC2Sx79aaRdeA5rr8fwOioIiNzi2hP1pcjROEabEsgz93oAr\nLr+aYTRhOozob+7Q7XSJopxEWERJiokyGg0fPylxEfjGEKiSRFdTb3SJcm2eOX2WwaTPgYOXYWTI\noL9BPZxhMh4z02ljScV0PEK6DosLi2zv9bHDEEc7DPrzdF66CfdC9zWC177jpxgNHqe3u4PvZHRb\nM6jWDjurG8y7Nltn7+eZZxK+/93fy/AX/zOPP/IExgQYqfblLCVCaKTtkJcFUkCSZFX7G4Vj+8RR\nVmVSKonWBcbIv0nvMgZTQBAEDPp7uBaIUpMagchLJArHdUmLnCwvybIc27Yo8xKFIIkiJFRBtWWM\nlBJLSdJS4YYNkqTK8RGWtT8slXjaRroWyv46BVYYCmY6NZwyYZQUNBoOBTFlqcmk4sLWDgcOHQcd\nM+ztUu/MM44ThFMyGQ7pLs4hACMjfNejXxQo26XMC9wQlGfhqRZxkZDqCmRg+3WUFkzTiHxavbE8\nrVGU6DRHKAdTxGRRhm27zHRnSdIJbn0OS0I9VHi6QZEVODLFwjB84iHCE5+7VK8Uo5soshhkiabE\nUorRZEI9rFEWKXmcMtNdpN8b09/rIfbZwvk0IpUOhRA8e/5pjh09Dq7HudWLbKyusjQ7w/Glebqh\nh2U7jKYJwyJiZ2fM4nyX2U4L3xJYtmHx0DJb4ymOq1mam6coMoJajRxNmka06yFYDr6wGJxfpnP7\nJnwe+jf8PmsTybXLVzLyXCbZFi2vyWxnnsXlN+BKTZ4+w4d/63fpdhq85R1v4+SjT6GpgoTyPK86\nS0VGMS0RusT1fSamxKAoC4OwFJPJpAp8NVQnUW4uXZ+EMGgtENJFGIs4irGVhXDc/agJSZwXSMuu\nVA2WizaG0PcpiqqIF44iH8cIy3D2/CbGknzjG95E0AyRumTYG7J97iTC5JjSJjM5Uit09HVqKxZS\ncnFtg0efOcXswQOUO1vUgxqjvT6nTj5Nvdnlw088QdMu6XZnObfRww9DxqUm9H1G01X6owF21fIn\nn8Q06g2Wl5dJplMsz8JCUmu3K4KlqVBLw3GMr23cboMsGoJoIixw7AiRlKxe2ODAkcsAyeLyAcLO\nLLgBJopJ0oTclAjpMMkyMBkxE5rWw/AZ4D/BdPMEvlVQFhnCpIyGPUCjy5xpPCWwfAYTzUK3y8UL\nF7BrdVKdo3UBtqRdSo4eO8a0LNja3GWwPaJR79AMArwixfdaRIXFRBf0CsH2MIO4x9ZggDRwZLHN\nfLtOYFn0enuMJmOm0zFz8zN4tl3JUcoUx/bIc9g4f4Ajtz8IPwbhT5/h6rlrmI6mWLV1uo1F+ju7\n+GGb3clHOdB6F7t7d9PtNvnVX/nfSeN5hlNJEJQV1E5WFMnnGGF5WZKVBl2AJWXFJbMUu4NeZcxC\nYHTO3xGmiwIlfJSwiSZjarVG5akpC4QCaSkcqbBsRZ7uJyYbgy5zPGGRTCaUsSDPM1JbMMwlk7jg\nQx/9JO985zu56+4vUeZNpFqhMJIkjStOmSnB1J53zb5wrWMDS4sLzEiDcmx602oKfs31tzJ/+Aqs\nomBleQkpDFmeMRmNUbaknkY4aUaZ5Ry68nLiGLSo7Ku2NJBlZBgCy+crjzzO4SNH0dMRw+GAsBVy\naKnLk49/BZuSYZLT7/c5v7rJkYMHuerEURbm5kn2+jgzHVzPZZKVJKM+RBHpVKO1wQmrt+jW+VVq\nM19h6Z4CroAsXMJmcf/+a5HFGbUgIE/2mVj7MeUIh+HODiqZYKTEMoblmS79/oAZ32USpTy22ifC\n43xq4cSa3qRHMV+jVZ8hkDZpaXhms880MewN9khNgVtrsXNum6vyjINNjyKOiUYD5ue7ZEmE4zSJ\n0hjHdkAYCpmx21tBv0kgTxpq9Di/+RSbF9uUfJXOzJXMtFewpE9oTynV3Xz5gTO87Xt+gg/9yd2U\n0xLpxvSnE7I0545XvZpnzpzB1ZKCgrd827eyvr3FxuouwiuhNMhUMhjsIdAgKj6BpgoYMkWJJQy6\nSJFkxNNtsjQFZVHkOUWhUZbEcVymaZV1WW+0qAc+SigKpRCtgu2tbYzjsba5QWk7SAF13+GPPvph\nYtpMrBlkqVFWiquqDKBS1iik/7xr9oVrHRuoh3XqSwak5jJ7FlyHaDggigbM+xb9rXM4YYcwDPCU\noZhOmExiThw7zGTtPCYaUxMBWFVme5lEiDzDkg7jpE+tXWe2FpAWEbOXzSGkjTAe02iC50sWswJ1\nfIWjBw+zub6Kb2LynVWUBl1MCFVOcmGTRmMGU+bU2g0+d9dnuPWGqymjjJW5BWorm/BB4A0gopvJ\nkxhLSNzAIctzsjLDD0I2Lq5hKYljKTZW
z5H0+tiOC0LiSMG416PuhRjH4cz6gNO9MVM9JrIlB+sz\nmLIgEg36cUbDc8knEY5fZ5SPWWnO4dkKneV4nRauzJFCU6/7jEcjpKrmHoFfw5EORVYAEaXWlDKv\nuFymCgYKRIvrrq8x2v4mTm2vs7b9WU4c1zTCOxHl23jrWxcpCskb3/x2yCb82R9/GkGXRqvNJIrp\nzM5xy6238ZY3v5nNjU2OXXYE2WxSAFILRoMxGxvnGfW3SZOEWt2nzHUV0RcKLATKdvjI7/9XkiRG\nGU2elaRpjO16lMWYJImrFjmGIj2LNiUlhlKaKnXYrrwxQRAgCgedpJTCIR6NyURBHnaQ6YAkPsu0\nmOCKDuHci5Bq/nmX7As4ZxH4YUhvd51GMyCXhng4wAltovEIp7mEKyVxOkV5FtGwT2umQTnagyzD\nCxokRYrvC6bJuErwxTCcTmjNzeMJn0cffRgzP4vrWQgjSOMEowTSUUTDAYHrUo4TpDFQChphGy0k\ncRxhsowszwjCJvk0JkmmuH6bdBhDlNJuBmRG4tXOw6PAeyDuH6LMSvBtMq1RtoUkZ3d9HUSK79ZI\n+0PEtEdYD8hLQ5Jn7G5uEbjV3GCcaGzXZrHpgvJQOfiWTT0IaNk2geNQkzYdt0GjLJiqFJ2V2EZS\ncwOkEgSeg3AMrp5gCVBSUOQpUTrB932EURhhcB3DzEIf8RBwJWj/GI6cJS3vQ7TPYG29hKsv/2Ge\nufCHzBx/EXlxhq986fN89r8/y8Mn/4qffP9v830/9O942+vfBJHiLd/yrdz08pdhSYtnvvog/a0N\nnn3sQSy/wVW3voTZlUM0uzX+9Xvfw2c/9xl62ztkeUpelqBckjynoVxKU2CkxnZsiiLHCzucuPJq\n7njN7Rw8dJhGo1mhXaOcxx97jI//wcfI4pjSKkiydB8ILlFGIRR4jl0xyLwOQqUIZsnjU3TqNYQK\n0KVNbHukUe95V+wL1zqWhiiJ+dpjXyPSGuFIRAEGi2uuvZYvP/AYe+MejuVhJTlt32YUDdgYRdw1\n/SsC1+PAoYNsbG4gqdqOZVni2YoCSZompEXO3k6fLMuohSEokPUWpizphD6TYQ/XcomE5OL2Lhd7\n6ywuLpHlBZ6ARlijjBJmZ7toS2BFU6K05PzmNnZPIl2P+rEz1Wa5DrZPd2n6YRX4U2QoISqJh+OQ\nxCM812EYb+O5DhfXt/FqdQ5fdoLH7v0qTVGxhAMjOVbzOOw3kZbPZhQTZyWhUnSDgKbroArDnF9n\nNR3TD1zkfgovQuKqKr1LFxqDIU9ThK5hW4rpdEJ3pkt/OEDaksDzWDy8B18AboHx2CEIIRq9juHw\nVvzml5lEj3DNkR+gF32eh+9zueHKa/gz+de8+CXXcGHtr/iRd38v49jjt3/jV6n7Fs888AQLBw7z\nyFef4PKrLqfdnCeKRky3Nlk6cLCKtXMtPvzRj/Ctr7+T8VgglaE0BZ4rMeQo6XLDS2/if/6R93DD\nDdexvbbL5uYuYd0lnozQGrxGgwPXHOS2V9/GW97xNn7q3/4o/e0dkiSuWMrGkBSaNEv2ffclaWIh\nAlFdzTDkSYEWGbYUSGETpbvPu2ZfwGuYIQhDbnvFyzGOzdbaOsp1iKMEV8MNN72E1soK26ubdKRi\n9WsPckDUuf3otTRnOuzt7mEpyWDYw/d9zp8/z7XXXs1ge5Pu3AJr586ydNkxTFGQxDFBvcbm+gbL\n11xHtLmFKWIcx2KaZOR5xs0oaq6HNDm725sUtZDRbp/55Tlqto0V+hSm4I3f/GqC0EfZNrrYRPUm\nUIBe8GlvX4UkR6d7FMZQxinxaEKt7jPTPcBkNGYSjdnb2yao19HG4cL5DUSh8FTF0ko9QZkLHCqq\nfNdTaNelGXjMhYpQ2QgjadkWQVEwW/OJ8wzX2LiWrGTrtkCJDEdWYIZoMsGyLZqdThUeJBWYEksZ\nmnNnK4jfO0CWLdb7P839f9ngtd/8fp6636N2YoNx82eI4n/B67/xRi5u3M3PfvDHsYqX8O7v+35E\nnqNwuedzf0HLdrFVh2n0BQbxmMF4xDfccQcP3ns3t7deznQwJuh0QQmCsM788gHyC+tEcYE0JXU7\n5Of+w09x/Y238Sd/+Bf82q98mNHmB4l7MY3QZmV2CSFckIqw0eTf/Py7+MpXnuCm21/Mb/3Bh/nU\nH3+SD/3mbzHZG1TRF0Lh2IrJJKLlV4ANXVoo3yBtG2MS0jyhFAJPuUj9ddo6BoMd+JQ9Q2BpvG6N\ncZpx8PACWZrRiwZY2SzRsIeHZj60MUmCG0+J+wWhkigKROgwHOyxdv40x1dm8UTKyS/dzdLMDFun\nT+L6LllWcO7JbQ4cPIQe96h7kr31Xdx6DVva1GxDno9BgownzNRgyyRsrj1LaGaxXJuNx3c5sLSA\n53rsXZxUlMb6A8xHwLWgk8twTE6Rj0jjAeQl2TSpUE+UjPpDLGOQomBuYYHt3hjPr/Hglx7C1Qql\nClypCU2DXFmMswpAV1MSJwio2TYtXxGbDN+r4dcsOp6DFjmJE2BrG8eRWLZAibxSLNg2WVESRRGd\nTotkOoVmC991KcixpKGzuFltll+BB+4/iNM+w513LlE6f8GVVwUE4TLRsMNly7cTT1KeOHmOV7Vf\nxc994NX85M//GN/2lqdxsoQZF5I4YmsvxSoyRibm5OlHecu33skbv/NbcJXA9lzS/Vht17X5nh/8\nAa5+0VXcf/+jfPpPv4CnA37/P36CzdcnfOJ3P8UwmXLZXJuZ0GG5USMe94mEBcIingz4jZ/9Db7v\nve9ivJUSznm87q1v5o3/4u284RV3oCcRgsrFqaSLkAohBbZVq0YJSpCmVeCsUjZCKyz5D4hdf+d5\nwST6ICvYgVejRGK12oSNJrs7Q3B8lDZMd4f0V1cJPI/2/Cy1Tgur4TPII6TnMNkbUcYlgarTmT9E\ne/4gW7tjVo4cY1oI2s0WltcE26XmW6xvbSBDj1wZbNchTkvKQjPJDUkpMHnKaDJiZzAgxyLJc5rt\nGfK8pNFyObN6lmcvXkCWgovnNyjdZyti5HWQDJfJ0gRyjSM8TJbQbLUosJiOKoXCaDiiWZsnLW08\nO2Tn4gVMNqbVDip1rjG0XMVyu8WJ5VmOzXZoeiG+coknEfFkikDQCAJqrs9MrUbbcSmnIzoyoW1p\nahI8YXCkRKclli3wfRvlOEhKjCgYD4bUwgDt97A2K4lHseTwkjuu48rjLqsX/orE+i+06l1stctw\n9BnKdMSF1ftZmGszmvwy7/qB7+bI4dfiWjYvv/E60kKztbWHTnMG0RhtC4LQ5Xd+6YM88Ocfp7u4\nQCHKKlZDC/aylNe++nU8+eV7+ev/+GEOOjE/+uP/Bk+2efhPPsfR+RbdehPHCLTJyYqUKBvj6Byn\nNETRgPOnvobVkTxx3+NQGozS5MWIT979KV5y222YTOBaHmWpkfhYJqIgwJQThEipegQ2UgtSYUH5\
ndZrPIoTAoIjznL3hENf3sRQ4M4t89BN/SjIecvOrX8v8scv4wqMPce1V1+LPHWc4HCJcj7u/dD/z\n7Ta9eEqr1qK7fIDHTp9D1WYoZhYpG12+cuosxy47iL24TJFNKIdDdp+9QJZEkERgO5w58xhPnT6P\n2whYnKlzoN1GhBZt2+XG226jVgsIux2UG1BsbVDGKQ+ffIrLjx8hXPp4Va+8GnR0GKVk5VnfV9ti\nDLbt7DN1LcIwJO4NadQDelFCUeQszc0jBxE15eAJG08KbAeUYyOERpfQH03I4jGH51dwQp/drfN4\nXhPfmjLc2qAZtpA6wtFguS6FlOSlRTTOULZGSg+ExFJO5Rj0LaaDCS863qtOlVtgY82jMX+WJx+Z\nY+XgS8k3FzGqD0HEiYPv5f4n3s6xA2+iVQoefyinOTvm//z1/wnlT5ibDRkNK39+VqSUXsDG9laV\nqbk3Qjx9ils3t5i77AhGSKQx1EKXydMXuf2d38LxK17Mf/3pn+PDP/2vObLyai48/ShBWbAcCp7Z\n6dG2BbljU0YpJrBBCaIswZDywff9r2ye73Py9NO880ffgbRdSgk//8sf4Jff/4vcc/c9lVvSUii3\nTWHPgWii6wcRGchCkytFUabQ6Dzvmn3hzF/GIJVCKMXBy47SnZ0lqDVozc/z1u/4dr7n3T/AkWMr\nLM3N8do3vJGZxSWCepOjV15NWeTcevNLOXriMC+9+gRXnjjI5QcWeNFyiwO1Ej/f5aCX8ZqXX01Y\nbtIYPsVKM+PY0YCm32e2FdGdS5mbybj15mW+/btfy1zD5fqlIxxttplJJXowwI4LNs9cYO/iNqun\nzlCzXL76lYd56ctuw3Zdgvb6pZMljw5VQDmqODchYTjoVQE8RU5QC9FaEwQBw709Fudn6TbqGFPS\nHw3RAjzfwQ8cbFEiKJAiJ0mnjKZDLEcxGO7x7OkzjKIpF/fWydHMLCxhey6ZUIgiQeUxoW3h2hWN\n0XVclFK0Wh08t0ZRaozJECrBaT1bUd5uhovnFvjMX+2irIsYL+euTz+MV2uzvnEX/fJT3Hrdn5BO\nTlGWbVqda/jvHznJ5z+9yrRvE9qCelBDl4LETEmLiLi3R9uxsWzNietezMUL6+gcYgTaQL0sue+3\nf5cUuOy6y3nJDa+iI8dML3yVVs2niHM6EqQlEZ7PsMjYSifs5SM2J5ukZcbZQY9TDz7O677zm1l/\nepNf+vFfoMg0GHAsyY+9/9/y8c/8KfX9CAslLGzLBpWQqcsoncMY/wSlexRJQCNYet41+8Jdw0Sl\nqp2MRxidISjwAwdNxmhvh3LcR+URACbKsPICKxmTWClWGePKAo8UlY4R8QQ93CUeD1FhHaklIjGI\nxjw5isJoZK1Jngks4yELF1m6lLnClS5pAt2ZObJiyvruDsYS6MGEeG9AoHymgxg9jVk9fYaZdp0s\nHgMpllqDp8FcAdPeHJayMaZEILAtl7DWwHJ9lhcXQUukFGR5QqNep7+7S7tdr+YxlMRlQqGqiIQ8\njRmN+0yjKdFwTJ5MaLbr2K5Hd65DarlsJZqdomCQFYx1SRnUGE8jdFGAztGiyhwpyhKjBYNejxJQ\n+/zfspTUZtcvnSxfPT0mTw5y7bWaJx97mpe9VrJ9/hxL9XcxE/4rErNDZyXmyae/wFVXv4VP/+UX\nydICXaZ4nsNw2KcsS+p1i15/A98W7Fw8y1Ir5LJjx7nmhhuQjk2w7zyOjOGV7/leLGEokilOOKLu\nKAAAIABJREFUfQ4tBS1nlbIYMo5SPOkw124ySQac3V3jmVGfJzbPMcljPNfGbs4xmg74vd/7ELuT\nDdYfPkNvY5dkPK0AfpbBchRf+Mo9zLZbWCLDyqdkozNYg7uwR3cj9v4cu/9Fyuws463/IYjo0vOC\nbRZtDAJBFOecPXOeZ06fZW9vj631NdLxlHPnNtncGtDfXCcqYra3NkjTkrKf8PSZizx57v9m7j2j\nLLvOct1nrrx2rpyrujpHqVtq5Wxl23LEgG3AGLAtwsHAOYA5pAs23IHBxoFrHLBxDhjLUVaWlVNL\nnXNXVXflsHNaea15fuyWbO7FutzrHzpzjD3G3mvt2lV7rPXV/OY33+99FqnGKnMVl8WyR13otPyY\nSqnKaqnMaqVKpd7ClxquH1NqSVbLHtPLRU5PLzA9VeTsYoXT80XK9TrN2Ofw7CwnlpfZf/YcZ4or\nTC3OMb06z2xxmeVyCcuy8H2Pqbk5quFRxCkJk+BGfZ0qkOuRSplIVSGMYlTdRDUM/DAARUOogjB2\nsVNpIs+nUanhu21Mwz6v2o0IQgXX9fDbDsVGm8XKKj0jeSDCUGx8X6JqGUDHCBLspkfeTVBjSdzV\nxUq9gRAgVEEgPZAdwznbMlFMDS+MUbUuKo0SPYUmHAN5MVyzfiPX7FqjPlUlCX0eeKCBmnVx4hco\nlR4DsR9buZ4v/fM0l+zZDkmWOBRkMzpO4JNJ5+jryjM+mCOT0ZCGSkhCSjdoTZ2m3l6l1qgQygip\nCHTVwB4YJk4SDv7gaZzGMn6kY6OjukWCpIIMAwwBU2tniTTBzi076E/30m1l6VVgJJvDNAzStZCr\nb7kOv9XmL3/z90mnu/iDd/0ZqtQ6Xsm0+eyXvsBlN16MrecwIoO8FqGhkYiIROroahdZ5X/jalgi\nQdc0Rvu7O6JD20BRNERXP0IoSDvNzMFnsG0NNXZo+GA0FW5+7evA6LAPeyZ3ohCj6jr14hpOs0Yu\nk8W0bbwoZGRyE4q6kVjNMLhJIREhShjz5COPsXdiE5qhEgud0XVbSKdNomqNMJGYuk6p2uDpJ5/g\n1bfegmg5hDIhncuQtmwi64XOeuUCqJX76RroQcZ1BB5etYKRCISMSaIIVZF4bQfbSFGPOjOqpmlU\nKlUm1m/gkbPPI6RkQgikDAniiEAqrJSKWENp0qk0fWQoHptmdbGCEyW4iobX9tBzBWqKQgsPeyDF\nZF9XhzDQ2QtHVVU0Q8NzPQxVwTAtFAmTWyKUA8B2aLgGBaXN0yf28aNvOrz/49fStXyGMzMlhno3\nYugVkmCND/7tIQ4eDJBKDsdTUZSESEp0VcPUFbw4wm2HbBod46EHHuGKS/eSNi3wE44+9BTbdu/B\nHNZo6AqFfJ5IkdiRQencMn6zju82CdUYJVERQYswyiM1i367m8ncADgRQiY4cYAZKRTiJhsu2cNy\nqc6Z+/fTbkd05XSeefxxWsttPvX+L/GeP/o1EksgspL/8x8+zj988PM8+KMqYVlHDQS60Ih0A6RA\nEerL3rGv6A4+dDRdkefg+x5pkaHeqNGT6yUIFUBh/vRpMiNdRCKhZ3ATjrtAa3kF1UgjhEGspVGD\nFn6s0XADenvzKCoEiY+hqRhWjlariXCbqCJB2CpmxuayvdvRvBqRtDGSGBGG+JUSBgpqELLSWKWB\nhZnOs7Z8DkOxGBoeJgp8Cj05Wlqr48PpQCY3R316hrTI
oQ\npMXNH2bTwb08tuMBhiJNCBZFxQSOwhC92DJYT5h6/os48JUvkmAIzUHaWY/WzAKjSHJXELRg60XP\n44dfvx4noRwYxonAuhPN48nyTxHWWmIdHRNI1XX950f50R0N/vyvD3PGC3O4Gk64ps3kO2/kgRvP\nx953BoNC0g0WW3qcd/hgSXSE8h5R9jE2J6lv5ORNV3DrT7+Crxt8ENiyZHgkZjnr44xHK40VgSxJ\nuPnue3j5uWeTaEkZKjvuUGSUC/PkoiRKGmzZcDzrJ3+P173+vfzzl2apXRWY+IPApz53hNddu5Z9\nj8VVH2elLGo2GxTmKAfM4T0INE6CQeBNSeR1RT1yVDbf3qC1RktFTdbxdUFhYH66w769R4iTiNGx\nISZGh2jUE4YG6yRpQg2QUiCDxAZDUhvGOvu/DtaV60mDjr/6r59mbnoaIQVCa6QU1NI6A40mUkBZ\n5mitiZUmiqKKVawUWZlh85IkSjiwfx8iCpx5xll0O91q1lzdzYUDH4S3Qvfbx/Ptp0xiBpvYvODc\nv/h9Hnz7+9j06pew8P2dtB++HaWqxpt3gnltSb1kzHsWI0F83Ens3bMPKywj1pMisdKTWoGQ0BcR\nVkDkHXUXWIwFg0+9gOXlNsP79oD32E2nkx/cRm66aFfn0AlbOPHEk+je/GWWU40Ojo6qUTpDLkqk\nr9PWns8/+iDRCDAP5569lm6RkkYxnmoK0YpqvqUax3boCN7wxgV+7WULqDeyCjHvLzZz5MsX4LuK\nBZeQWYlXGiFiCutQcQ1fbyBrI+hmg1Z5mO07f8iOpe3UU8vQyDDO1+jOe+6/ZweFrYGKiETGbz/r\nEsaVINQSWt0CHQ/hvCavxUidkgyMMzK5lk7o0zPf5LWv/Qa1qwJsgtk/k7z22rU8+miFbMVxjPce\n48IqodP7ikEQhEdJhdAVqiK8QynJUBITnEVJgRZUqpoIZFwBM8FLcm8og0XYgMBTH0gZHh9gfLhB\nEmmSJCLSmrgek5eev/vEt/77QccH9u/DOF9J5yDxLiCloiwMIkjqtSbeVdBwnmdkWcby0hLdXlb5\nnofAxNRahgZHWVhYIiCwDrqtkdUDfpIscuumOhe8680UvR73/PGHeN5f/zmLH/sG3V13UUMQnAdb\nUURe9v73cfnn/5WlKGLCBezO3aS+Qt96kSMOAYEjiAwnbFVC4MhjaEdgTziRqZe8gme9+hpmTUab\nPotHdkEMJBE9rXnucy/hsTtuRujKqEk4Q2IycAXCCyJvOO/CKaIZYBzmOoqyr8CvqN8IsSrQ4YLH\nB0cAjIG//5txXvtbx7H/Q2oVYj7uB3s56/duYOCUI2gtacQaLcLKoFc16eD6BT5bImR9hpNJzjjx\nmTzvzKuZ0FuY3b/A/NI0rljmvLPOpBYLgilxKO7avou2sYggGailKzoBBSHv4LIueW+J2YP7acg6\nNfVL3PztN5FfL2EfTPyB558+c4hNmw1pmv6c10vFIq94YpGqVGa8W0FApcIGwVK3T2YcxUoT2fgC\nscImTwnUEQwozaBOqEW6Eg3MDdP7Z7j3Jzu540eP7TlVbQAAIABJREFUcM+9e/nptv3s2naIuSPd\nJ4zZJy1Zev2SflZSmoBSMfX6IDpKSWuNVRkkrTVFUVAUVSe3Vq8TRVElwOA9LnjS5gBxvUHpA4vt\nDrffsYcwIWARItHmxb9yJbd//AsUcWDN8iJFEmi0eyi3WA0UKYHzFkHgpu/ditdNfJogS0tifaWm\nEiwN48hLTT8aoqcblAGUEygEifWkDrIjyyTTs9z5uS8w6iMaXjORF3SNYKBrqWP5yT/9K+Omz8Fo\ngHJkEiUCIthKlEKDDxlRtK0qwbbAvj0KESByj0tCSSmPqj+trsLBV/4o9/8k5aorj+MrAwOrEHP6\n8ozTX/wdNp59kMFmjUSritoTxQTrkD4Q+n3M0mFCJ2egtp6x2lYuPONXOOXEZzA6OsaajeOMrhnj\n9FO24KXHB81dhw6TCYHJS5yxRMIS6cCghsjn+H4Lmy3TnjlCSszy4XP5zvf+B+WX1eNnmM/Pcdym\nx5+xtZXT8VG7CiEedxI7Vj8ZwMealje08pxWmVMISY7HBIFzHh8MmkAqBbVY0awnpFFMohLq9TFq\neghTCBaPZOzefYgHfvrflHXc6ffJjaEoShbmF5mfW+Dg/oMcPnyY+fk5+v0+nU4Hay1SViZHR29U\nkiS4ELDW0u31OXxkhvmFRW77we1cdPFzyMthmAKmoZhYZHHQMGkcOkm543+8k1a9S61IyIuCnrcU\nCnzHwLe+x9df/EImCoeJBFZXSpkiVN36iVddw1M/8EGmXvxKpNYYPNKDFZJSCga7Mzz08fcSdj6I\nCCXLEcyowPpLnk85NESeGBK/BFGdC97y+2y5/BUYIhSg8GgUkVasP7EL24GT4cCBlHqSVqWXMav3\n79g5d1iZ75EC8HRzyXv+cA3v+Kt1LN4g4RTgnTBx9qOoWBLHEi0VmAr6Dd4TCYfIWpj+YaLYkq6N\n6fuIC8+9nGZjE2KqwfjmDQw26tTrEcJp2mnC7vkZTOlABKwrwWSoUJBKhygzys4yi0f2sn/vLuJI\nUyw9nZtv/C3sV6KVHSbwqc8eZvOJlhACURQRRRFQaSUf7ewLIVBao5RaHS3IvQel8ElMLiRzfcN8\n35GLhK4LZNKShZLC5BVLwpbgPVrEDMeCsUbEcAojdUWt1qBWT54wZp+0ZGnUYnww5KbCwLv9jF4v\nY2FxmYNH5ti5Zx8HDi1w8NAcB6aPsNRq0+n3MCZnudWmVWR0jCPIlNtuv5O1U1OsnZpi74HH6PVG\nVhGxeq3FaZe/gPlaJS4+4CJkHOg1BshO3cTFX7iezklbURhUltEUknY/p8ATGQ0qo5QJtbpiYnCQ\nfGCIKBMQJEFblO0TbAkikKqUyEc0TWC5Zok7cO5bf5eNl11OS9Ro2ohuImn7wMz99/HIlz+LyQ3o\naiTWFgVeRJx6SQHfBy6CuekJ1kxNIIVDr+wi1jmcrxpociWJBBWSSBDIlV3o1lsavPPtk/BKYBvU\n1rSIhUE4cB6Cz2jGFm+zCl4lEHpd2o/tohlqDI6N0urEPPMpr2C4voXR8Y044ZhoNHGyD7ng/r0H\nmGv1yUxJ4XqVSDgOsDRqmlgaUm9o2ILlfY/Rmp2hzJ7Ov335/1jdYSbeFfj05xaJ6pKiKLC2Spyq\n/2QhBJz1lfJoMEhnKTGkIvA/mXvzKLvO6k77eYcz3aFmlYaSLMmyPMmzjDHGboNtDAYCGHAIhIRm\nSggJNGOToZt0EwKZSCBAAoGEpBMgNhAwZvSAwTbGs/EoS7ZGS6pSjXc+0zv0H+damHTi5Ov0t8xZ\nq9ZV3aqrqnvq7PPud+/ffn6JisEHCC1RUoMULHT7rGSG1qCoKDvGkReWwlQDb9oW9NOCQZbjTDUq\nHQmIZPCU1+zTZ5M3VJOW1iCo7hhplhIEwXA10YDBB
WG12VcJa9ZOY8wAryylExx4/AAPPbSDKNSs\nm1lPYQ0XPve5+OBmOGYPPACrL76Vbz/QwG4/nt49u1k1KJl3CeN5xvyRHnVgw8aNDB5+pMr+vMU4\nRyADFgIAQb1MyYTggc98mv7XPk+00kNYSyAEljHaTjOhNSrv4kJBKQWhC2joiHv/4Uomt51ILjqU\nOOqZxsmM/g3XMsKAuZFRzj/nWdx903Xkq6eQ/SOc+IwUvgd8Cu74jVHGJyc4fHiWIjfYokRHlRzf\nW4tQ6mi6IqQcstWqqVNjPY/vqcHxwC6IJ5fx3qHwOFtirCWQEAnIC0ukBGk6wFnF0s4HCdesRwUx\nth8w0dvI3T++jlZnjjz3CC2RJsfHgtneYZwYo6YkLoIg0qhQYsqU2AtCZfFOkrdTirQFSUIy9lyu\n/lqPF135t9QugvW7POedn3PDd6oL1rlqL+kALw1BqBF4EqEINQhXYnJBGNQQtsS7AJTDe4WUgsIL\nSheQ9g0aTxgqNKBltfk3tuKxuUrCXs2xy6cuHT+NkD11lDRprMFaW6VarrI+ELISUSol2bBhhjiO\nWFiYZ2WlxdJSi69f/U3m5xf5r7/1W5x+xll8/otXoVTAD2+5GeWfD+8EPgBb8zvRo39H8+xn0Nu6\niYEvmX71a1nWgjX9gq+/4c20fvAjhJA4Ac4LNNCZXsXz//JPmZ8YoVBVNaVmu0zPHqKZtYASWVr6\nm4/hP33yo2x52xvpR5LQOlwQEXqFDRyj84fh+hsYHQzorV7Fka1baMUWp9oMAsuG57+Y3gmnkoYN\nanaMbRc4gnuA42F3SzO7lNAbFjiUqjb6OI980qTjE+fxCfX20TxfBrTmNZ1AgALVNviwhcQiqGQn\nWngCLMYUFaZVKSg9ZbvD8sFdBNkCupeydeo4nrH5HMqOwhQGm5doNOHoGEtRRseUFBZ6JmcpS1lJ\nuxifIVSBMDna9qnJlHrZorPvUSLpaS1u45rrxuEFwJ1w6mklahj8Ukq8pCqElCXTUcyWiYAzto5w\n4QUzvO4t5/O291/Mr7zvQradsZFQ1gCJUgLvQctq9UdJSuHpWUs7z2mnGZ2ixCDJHBS+ogVX9Jin\nlrv8e23y9gEdqjHF0nt/zn/UKq9KHTxTkxMcf/zJzB48zKCfsuW4jSyvHGGk3kBrRaPeRAhBt9tB\nBwnfu/5Gwihg1dQEzjqWW21e9KIXsn/vHlatXsXBQ4/zjasTjj1xM2d+Yi+8BH7ttoPc9dhh7t4z\njdq5n2e85IU8fMt3CR/fx+qWx4WCPPJoE+O8Q2iPzwO6OmZqcgvl8kM4o1kKDUnhgZDQ5DihCDfN\nkNYSkjJASEk/cChvEU5RaksooedyglLgu47nvfhl3PPZRxm4LrGs0f3+bRwcLOGZZ+HIApc/t1fJ\nRC6DH96UkJUlVsDUqin6rS5eCGxRoKMQ+yRWWrXpdcMPiffgZCUd2bsn4PQTCtgFenwZvxwT6gQ5\nhH8k0pOFEWWWUqs18CXVGPPA0p89RD1apnnMCey5f5nnbX8xHXWQXjRLfXyaHTvuYP5Im5qL0Faj\nnCfLC7wxRHFEpAPGw4QECJTClCmmmGf+4CE6RcHhXSOwfQW+BKe9pURKgTUWISVCKpwzbJ6YZFxA\nlEM226GTeOyCZGpkE3GQ8vI3bGfhcIvPfexmysJUED4h0VSdey8l1ku80pTOY4wntQOEh1grpIBE\niUqL9hTHv3dl8cBzvPdneu/PGT73hFXe8cANw8/5Z1Z5LwD+QjyZJDE8jt20ge1nns5xmzdy+OA+\nanXNiScfy8hYjenpVdTjGiEhRZEyPz9LZgxf/uo1EIZc9LwLKdIWv/3ed7Jn1wOsWj3OgcOPs9Lr\nsf6YzRwzcwwN/yl6l54EvwLipXDm6Z/g2Gd6pIh4+MufxRYVxCDXqtKWOUNbCsKaIpSCseVFvvG+\n97L02G5yadAb1tAwktgD0mKDgEAEDO68hR/+zju4+XN/jjAlganm1LtrN2IHntxBKBKciomyFnf+\n3cfQSuFlUE0pdg+hbZfS14iiiNOf24dvVcFy040xzlZwDZ0ECOkq+Ll3eFtV8J44qg2/HYJvql6F\nNSWl9OzfGxxNxZqTK/hQkaiSwIeU3hKHmlB5UBLpLCOBpz6WEDdWky6mhJ0F9tx3PxOhI9uzl6ml\nMU5wW9GLfS6+6AVc9pormLUrLPsBaSlRMkZFDfo9y8J8l4Pzh+m3VxBFikeQ+oL20iEGPcvKysmw\nHbgbTju1ROKHchcIJUxGilEpCGWJl45YR5iup78k2HH/QR59ZJYf/OBGjiw9zKvesJ0glISRQQqL\n0p5AKgKhCaVGIyuYoJI4oXEyoOs8HQNzacni4P+dNuyfJ3QvobLIY/j4suG/j1rlee/3AU9Y5f3U\nsXHjpmGJ0LJu3SQz66ZptY5gTbVZNThy75mf6zE/36PXFbzspb/AJS98Cd1BwPiqLXzjm99n9dpp\nvnvtdWzavImlI3P0O0vMzu7n+hu/z3e+ejnd3xiDE0G/seSCl30evykn/fZ3WX1wH8K76gw46EQN\nXn3ll6hd/jLKMEHbjA1LPabKEhkHtFsLQ1cxRSRqaAtpkDPV67H58UOsHbRJjMVhSFcfy3G/+RG2\nf+ATBCJAiJJQeXTuqHkonMFOTzLQCuMNpXcYJ5jZXLBaGZiFwclw7x0RygPeo8KAKE6OmgXZ0hwt\nH/9z8g38JM0tjWHfnmGw7AQ53UZbT6mqkV7vPGCIAl2R+qWnFqtqyvOkrbgwIJ1rUy/nOeOSc7nv\n4C5KczfPuvjbXP78XRy35gZq8Xd40zvOJFzdZaU3oFUWpAYWyj678wUWvWVva5m0SAmEQ1lHiGLd\n9CSnH385/TVAC1Zrx/QqX9EzhaXmHWubDYS3lFJW1UkJWiQ0otUceGCee27ZQb/VI+12sOEh3vP7\nv8zaY0dRXiCtRTwBPhegVWXSpKUg1BIlPYEAKT0yUJTy/7A+/anj/8vKcr0Q4i4hxJuHzz2VVd7B\nJ732X7TKO/GUU/j8F77AYNCnSDM6KyuMj4xi84JOu09aWG667Xb27t1DFCaccdpp3HjjD5g9cIC8\n32XzpmnWrJEENkeXPfY+fD+dxTkeuOcO9u1+lCiUdFqea796BcUnQ5iD5A+6XPrZ/TSLrNrQIdAI\n0IrYelak5+znvIhOntMNBFaULCUOU2jivPJcN2HEshAsBzWSoEkeCvLYkUlPVyuywCJ1ybQo2PvQ\nD+n7rPJWwVIIS5c+SoesS9ZSFCV9Z8i8r0yQLuxVPpiXwp23J1irsVlROQ4oQX1iZDgGrJFaUhTl\nUdLN0T/osMz6xJiwc459ezWcAOyC2syAfGWZWk2hdISzgjAWjNUThIwxQLd9kLpfRizsxdOhGM9I\nI8NCP+fhIuVZb7uTbffuZdvXHuaiex/mjcVuXlS/mj/5k0d4yx/uYs1xPdoDw4Ful6VGydLMHKe/\nbj9rXnAr
fdMmsI5Bv4NWgtn9j/PYrjqcCdwNp55msdqDL4cuypYkCUjCClBROocxGaNNyaU/dzYX\nXnIGW04+jrVbtjK59iTGNqzijz/2P0iSFMWwMuctzpQIa4i0JAkUgYBAV4NikZLUdESk/9+Ujp/t\nvT8TuAz4dSHEBU/+4tCD5al0M//H1977nvcRx02u+srV7N73OAsLSzz6yG5S41nqdNizdz/9fo8L\nL34OP/eyF/PAw/dy0rZNXHv11ayZbDIxorBFm97yMto71q4aIRAlY8069VpIFFWnquhv4Pabfxn3\nFQH/CKPfW+DkDy1TElB4xe5YMioSEldy1Wuv4Au/9jpiN6B0HGX2J6UmTKsusUVwwXvezks/8Rd0\nJ1ZT9zFGKLAB0kiSQlI7fIgfvev1tL7+JSJrCJSk1xjluNf/BiqUpELAzn2ouALllSbHK8Ppz8uP\n7ldu/kFMKH1FfjcOTSXzQMvqbmkcYaABhxfVx1GM2hNNu2Gw7H/SypLrA3TwLByeJXI9lM/IrEeK\nDO8HPH5wD1meMntgB4uP3o/PV5ChoCY0xeEFNq07gck1Ar4ITAA94Hrgg8CJcPrnl/jt37qXX/vg\nA7zk1St8/ONHuOojR/jFlSUuay1z1i99n3Z3iaDo0jp4mI1rjmFxYcPRVGzbaTnSeeo6IFYOUxaU\nxoI1NGsR8UiEr3see+RuHtt3JwOzxHKny6G5FnnRQAermFs5wJve8ja89kSBJNSCIFBDFYDFGoPy\ngkgG5GnBcqvDcmuFXv+pO/j/rg2+9352+LgghPgqVVr1H7LK+5XX/jxpnjM7N0vcrFFvjLBzx6Pc\ndccddHsDtp9+FhunVrPUWmbHzgdpjGjWNWfY+LpXMX/kcbRWtNoVCURIgXMlzWYDay2DQYq1njAI\nCMOA2QObuH/sBZzxN9+Gt8DYd7okNqW3eZqf+5OPcPcb3km3O8cqmyC9xsrhrD6OZilIVYmXCict\n9bIaGjoSadxEk+DQLFqFeJURiAEFTVxZkviicuzCMrAJJ7/stehzX0b7i19gqmyxUpeoQuHXTzGY\nfZzJGE45JztaMr71UzGlK5msadpFl4nV68EljM9M89iPdxAN53dKwLuhRaDzIJ48EFdVyfbu0XAc\nsBfWrrP8w90/4MyZbUyPKLTNWVg6iNcBVoeEgaNbFHg8S+15jswdYsNZ28hshvMlW2ZmaEbDqPxF\n+P6RdYzXBeOTAzY0VhB/CpwGZ71mmbPOp/KNu5kqaf86nLVzwLdqh7BLktBr0vYY/c4WOOsR+Cc4\n8U0lmoBIVWVw7as5/aAmqI8q1GhEHnlKA+3FDkW/JGk2EFJxJN7Lls2bqY8ucfkvvJUvfOnzmE5B\nXlikV8OUU1XnyFZ+pGMjI6hAoSOJQrO00v6/DxYhRA1Q3vuuEKIOXAr8z+qt/99b5XUGKbkpmdm0\niebICLfddjsXXfZ8vvi3n2PLlpPZtGEVM2vWsmrjMRgd8PBdd3Lqtq0c2LeLuSOP0+1kjI1OEsaC\nohigdUiel0RRTLM5VtlRCIG1EOqYfjeBNnAStHZHaK2ws8uk3Yz66BiyG1IiMAqcKymtZK4e4nKw\nykAgiEUAEm7/+KeIknGiwRy5NMRWU0pB30MiJda7yuLBOUDRcB0e/v7XmcEQm5zYCla2HIOfKzlu\n+7HMf/sIp53XR9/lYWtVMp47FBDFkhNO2EDPWRyKtJ8hvCeqxxTdDIxFBgpjHUqrnzJBfQJACDAY\nCPbNajadbgj+BD70hRZv/oWHMG3Liy84j7iEfpZjsxW2bt5Ez6QMBo6DRxbQWUlDeKJQk9mSsbpk\nMMQTAdx+/Qn058cQwrFmZsAFVzzCtnc9jvgI8LfAq8H+PfRcyGhcwF/CM15+gK9/LGa1VvT2KWZO\nHxluvkDIAIMgTGrg0mp1FCUqUgR1QTQqK5JtLlmnJslMielneC3pt+boLN3P9ESTVdOjOFMVRKyr\nmrlhEKO0BCzKR+SmZJBmhEoRyqGj21Mc/56VZTXw1WFerIHPe++vFULcxX/AKu+RHTspJMwvLtLr\ndvjN33wfY1OTXHTZxWzadCxL80sstFusCjbS6xyhyFpc/+1v0u138R7GJxr0+y2mm+sAh1IVuqfI\nC9JBRpIkFRTBe0pj2LRtb3XX3g61xkXMbdasenQPt//6r7MqtVgCHBVD1zuBP+eZvOiNr+Uf3/s/\nseYQVtawVlJoy0g5IEn7ZD5jEGqUy4nPv5STX/RSHvjUJ3B7Hq3epAAvPamUNHbvZ/nwJ4mtpJ1Y\nXC45713v4ZaP/xYTzjO+YYg9OgUe2RmiBAgpyIolpA8Y06NkuaPjc0469STuvvNuAjTCC5ynciDQ\n6qcsOLyv0EbWWj7yh+N89J8WUJfAiZR85soV3vJLq/nKTXcxMzlJq7fEZaechCwKXFlQSk230+OE\nmQlqUlOoOjrPqSddkloA5NVb9BZpS5ABi4ebfO2vz+HWmeN51iseZfTNfR67bx0PfegYJtf1eM07\nfgDPgefu6vPdr0c8tHeO0cxxrizgMLAOktpxvPyKS3jsntvoze1DBQ4ResK6QicG3XBI7dFa4gpB\nvdkgLwqMB5uWmO4hZqbPw5qYxkhEZ5AhlUdphXMl3oCUIFCEUUh/0CWuJ2jh+LcE+P9msHjv9wJn\n/AvPLwOX/Cuv+RDwoaf6f2UQc8fNtzOxaooLLjiPhx98kPvu/zHHHruZNZNrKPOUqdVT3HHzjezf\nt48iy/FAPaiBqCbcVBTQ68+T1EYoCks6yDDG0m736PVzwjBEKUVWOCZWz8HdwHvgvquXOfFN72LX\n+9/PWFYBELzK8U4TmAFaxyy2VwhVQBqCKErUqVuRU2vh+uvIQw3OVoZC3lLqGsed+2zmm6OMbd7C\n7IH91HJPNwRkSjNXFJFAZg4vDaWt01xcoPONq5joOOZdydxDCVwO/CVs++9VGjRSjxgZmUZ5jyks\nVpdsGV/DwX6HTZtmOHhgFmMDFAFa5hSlrWiN3uG1HMpeqhvJDd+N+e3aOB+6fmUYMIZP//0RXv+a\nCXavLGMCx32HFqmvDxkM2tzdyjjcanHathKXrieoge96Zk7dS6P5E41aLdbkNY8znsJItPIcOjzC\nNZ87Des9cXOMPBuwuGOCw5ePsm57m/BKx+9+4sc8ev80ux4MqTfnYBZYB62VJgjLxmM38uDsQXIM\ntUQhopKoWcNKgS1LQhVgwwIvC0TgCYQiMAHWzzExenLlP+NB1SzT9Yj1G6cYm6wxPjHK1q0n8s1v\nPMztt9zHmpk6pjuKCVrIfyMcnja5y9333kevnxIkfa768td4+SteRj/NGZ+Y5JZbbuWcc87hmmu+\njRn0CMKQOImrtEYqwjDElqrC5siA/Y8fRqkApSrKYVyvsbzcotudQ2uNdQXTa+fhHmA7dD+c8siV\nH2TMVENOTkLiQpaiBjYYEDjHyP79/OOvvZUxK4jCJqe/4j9TjI6w9
6FdRIcWKYOcTIBFoso+t/3Z\nn6In1hIs7MUEPZwRhCLG+ia90BBah5EOjWSAQ5U95u7/EdI7JscneOjHS5TbINgLmyYM46sMxhYs\nLRypuvbGsm7NaqQyjOmApDbG8vwc3gb0B0VVCg00uflJufSo+9YwNfv6V6spxA9f30JdAic4w7dv\nmOf66xK++Z0G99wwx6HZIzhtaas6A205NOVoRrtY2zzEua9+jOM3zMJ7gAVgGg4s7qa1MkLIKKNB\nE4xCxpocixGShVabTn+Zbt/wxX8c4d3/rQ0vgdGPDTj70n2c/bx9cC4VG+1UWJztIMwSq9ZMoEfr\nZL1ljIDMC9p5n6BW2ZSLJ+zUpQDr8dLhRBeJohkehzU57/yvr+RLX/0iKnCYcomF9jJL7UWCsMZf\nfPJvOHRwkfe+8420TBtrBBOTE095zT5twbL/4GFO2raNpNHgPz37Cqxz7Hjku+RpyaYN6/nSVV8e\n6p0ERVGBBSq/lpKyP6CfZ9QadXqdFcKoQZrmmDRlYWEe51ylXBWSLC8Yn1gkPmyhCZ0oov9Yn03G\n0RkSMJ3wLIw3uPijH+Huv/l78u99j1JmTArJ4VhhEFz/O+9jevt25lu7WKdDnAgQOJwQuACarouY\nH+BUSZDX8IS4c5/NST//ZnZ9+o8IH36Qri5JCo0vLZkfkKgAtCYxcMRKdu6MOeUZGfwIztxesjS/\nlVNPOAYloR4lrKysYJxlsNAhieuce+7pfO+G2wl1Y8jasgSyQncYPM49OS2zKC351jVNvPf8wfVt\n1Mth9DPwilemvOKNKa0/hutuSPjOd2vMPZpVPQ3Z4oyXzXLGyQP0nwGfBN4A7iG46d4xygKc6NFd\nSlk4uIetq2bY7xexNUcwWUdFCR2h2LMwz4//IWfd5oSX7k2pPQJcC/wRwwQeeD2UDwjkYIl9B/ex\nau04jz0yS2o1ohQIQpz1WO+wpatsPLSi189ROqRed2w6ZhPOKj79qXey6/HrICgJGnXGag3aKwaT\nhzz2+C6IdrN58zau/Kdv8dKfuxjbMizMLzzlNfu0Bcvmzcex6djj+MEPbmLXrj0cOrifN7359cwf\nPsjBg4fp9QbUajWEE3jvKItKZJkPUtIsY2LVFK12h7yfY0yXPDeUZU4QKJCCNK/GTQvjOPaEfpWC\nbYfHd4+BCekHBS61EFQkkXpf0M4Ctp33bO649VYK59ClZeANIZLJWOLu2MF4EhO4Grku8FWXBm/l\n0B7WokqNDTLwOcefdCIDp2nFNdY4KEVJzQUELmMgNNrHGCcIuxlJPeb+e2NOeXYGt8Bp23Nuulai\ntacoLe12i6mRUZbSAVGUMNoYBee58DnP5LYf3oezIJQj1iGFs5giP7qiVPsWT71eJ8syvnF1naKQ\nvPfLbdb3HHwJ+FUYa8EVr0y54nXpT7pmUJWGX0FV2rkHdg3qfO8fxpjdG9Dwgr5xyCxlLLdMFp7l\nUUltSx0/WYBsMaYjRjckONfgpgc3cftjivXTbU44f5mTf6nD5tEB8i4YnKXY+bVR5CBnVEmSmuaM\ns07j4JG9lC6jl3uK1IFzhKbSx/VbfWQQYsqS0lhWrRrn7b/2KpKpOerRWpRSzC5k2EmPEBqHQ+mU\na777+7zqhZ9F2HE+9ekr+dU3vIKefeoO/tM2VvzG1/wi27adzvzSChMT40QBPH7gMXqdNuEw5XLO\n0WiMHL07SqEYDFKyvCAKQwaDPlJIzFC6jqgmK40x1Ot1lBIMen1e87oHuPjGx2AErj/5BB7+L4oJ\nHEvG8+K3v42rP/ZJGq6gkCHgoXQYbQiygAcTSyIcawoF0lHzilAYBkpy4e98CJNIvvG7/4NmaQid\nJFOeQARob+mHIcujE0wsrSBsjhEdwmPPpnnGWvZcdT9n/tUH2fHWt6JdyGyQsuryNh94yRx8GB78\naMhff+z5HHtMRBhFdFZaJLUGY9PrWb9hPaOjo8xsmiEMY3733b/Hjgf3Y6xDGI8Rkp6zFMPu/xM+\nN9ZVe40nhsiE92w7LefSywa84MUZ69q2CpxrgO6T/mBbgd+D2amAG78xwe6HIxCCIocsLem2NeX8\ngHX1E5HJEv2NA0bGFKNjIS6M6PWzIQ9MYlXdG69AAAAgAElEQVSAqI/gozHK3KGdptEYo3fg+6jZ\n43hsd59OnGCdQGcpvZUWJuwRNUoaY5pIg1YS7wzCKHJT4IHCVnT98bokSSAZaVKWJQttS6evMJnF\nFA4h4ZjjGjSbjjwNmJzaxMYNZ3H3D+9g/0MrXHvzXf/qWPHTtrJMTa7noQceZWbDDDt3PkAoJcZU\n/hte5Eip0FpVnhzD6TlrLVIElV9HUZAkCQC+KCpFrvgJobLf7xOFIYHWrN+4fHRz/+MH6rTWhtT3\nLVITEYe3HM+mN/08fOIfWFGGUnicgdiVdLRFO41TAi8EgdDgDU4ISkLa9YCwMUKuYjRdMpFSzyul\ndyFLktxw7AFLu9YnIqYbreasX3kXbV3gv3kv4tB+IKCjAnLh2HVrCB8G7oITtxYce/xqLr7kOaxZ\nt56xyQmK0hAlSeVtg6Pf6xCHEX/813/FR/7gz7j+698GW40MJ0kIeYF1DuMcQvhKWf1k703v2fFg\njYfuj/noHztOOa3g0ssyzv1fGRPjEVpXqWavHXDrtQ32PKywpsAMqrEKm3tMryQsJRbNzsV9HHda\nyOSYpjEeUR+JSAvDxESdsvRYIfFC0xcWIUpqtREGeYfMKJbNcaws9ch9jCur94hwiFpCnqbYzGLa\njjgWxEGlpiiyEqkD+oMUoRNsKakFHq0dPi8pXJUmZ2VJZ9HjixDrc2pNR7czIKmN0tk3y9yRO1g3\nM8m+nfP/wpX6k+NpW1le+rzLWbt2DUEIxvUqC2c81lX+IEEQUpYFOviJfMNaC15ijEVK+VMOxUVR\nVCmYqCzzvIdQV6rVP//M1cQzFnbCjx6Z4Z7vT7P8JctYCwpfJ5gKGZubpcwMRnpyX9JKRlh97nbu\nvuWHhD7nGFdDSEikJ7aG0BsWG6NkgWKk6/G+wGlH1yRYGTGqDRQ9MqXRRR8jI/IwQJ39EqYamn03\nXU2nVKwWA+ypz6QxNs59t13LB2/YzeYrSvgULG35M8bHn38U4O09dPMBcZwM51Ic3lisC7BW8rb/\n/BsceHQvohSk3qBViFOC5UGXyglvCFMXFYD9iSIAgJYQRRFKKV71qitwxQCNxBkDNsPmK8wd3kOW\nLZOImDIr8c5VJdtCspI7Bt5xxgWKVdMwMd1AR4rSSsqyKsyU1pOVnpYR2KCJsCGFz3C2QZm18WmN\nB37ocE4OrRIt1jpK0yIdLBOEENckoTaEga8c3KygdApkwNTEWmy6j9FxRzISEgjJoIReJsjSEukV\nQnlUoECESJlgvKpgj96zedXJfPj3P/uzt7Icf/yxHDq0j9HxOrjyJ4204WxGnmdH+yZHZ7ClJMvy\nqrNLUI3aGovW+igrt1Li
SryzFGWJECXaVJt7LoRn/dwhnvXiQ+TvlDx6a5252xXZnQ0Ot0JcUQWf\nCSTHXHApU5c+k9vuuJt6WYkdE6cogKI+QmBgui9pBTlWByinKIXm9Le9HbVpHbf83gdZ2++jhcfL\nkCw0NG1OfscNHBEpbfoEhPSVxdc18wuzSCe4/66YzeeXcAuMnLWbHI+SQ0SPFyRBnSIrSZKEIu2C\ns3T7HUbiOn/x2b/k5ZdehjEFEVAUOU7BSBLTTzNsZdRZBY53PFkbK4TgzDPO5LTTTsNaEFGI8Bk6\nEDhbwdOnZ+osLc6xNDtPPhhGnLUUtlJvT0ytohbPEscpSuZoIsI4xieOtCjRvmqcRs5TWENR5sOV\nsIe1faJmQqELVBEjpMF4j1EKrUZo6Igi79NpLxMnJeOjNZK6xliFN5qRkVWMjaxnMT2ECkqCwDCR\nNBlTIYWVdHsZHkfpA4pSY8oAK0KEFRhnyAeOPenep7xmn7ZgWV5cBm85fOhxRhpNVKCJ6rUKVOAd\n1prhNGUFH3hiFl8HiizL8EMvxCiskef5T600YRgiUBRlinOez191Cq98aAf1nbbKx98B0QHHKc/r\ncsozgfe14AxYmdcc+XHEkfsC9j3wHQYPe5oWfBmiFVhhGTvvXLa87IXc97mrWNrxCMpZhJfgSzIR\nML5mE4OySd9BqgvwCVoKEhdC7rBxB2cddRVSCkdaBqR33EZLgiKn1doG598GV4J+930YP8B7gXSg\nZYCXCXEoydMcIQKipIJO9Dor3HT9tcxs3sTuhx7Fe4kRFm88sQpJmmMsdlM8Fmuqjf9RAaYUvO4N\nb0SKAGQ1tWqdRRNA3ifQkrJWDVKtW7+FVdMbyYcMskG7i40jwloNpSxR0saHKaXPCZxDlB4Rh2gJ\nxoEQBi01eVY5IDgjsaKPkJDlHcLAY/oJVmYI3STwDmSASAKmJiRr18Y0xzK89VBIotooG084k8Nz\nbRYOZkSxxBYZRSpoywilLEk9YVIleAEuqNEZOFqLGUXhaDQnWVycRyHIy6fmhj1tweJdSRSGjI+v\nR1iLwdNaWj5K9piamkIpRRzFDAYDlFLDgPFEUXL0uSeOsiyPKm7zPK+UulikFFzz1dV86+ur2HZK\nm2c+q8XpvzrHGjeoSpd3AV8AHoTxTYbx7YYTt8OF72zBmY/w3Ldq9t0bcc+nmyzepdh9171su+gS\noolxysJhY08pHZFVNL3g6vf/Dk57Vrk+XoQYEbHsHK7eoGkzhC8JhCS2nkIICpehvEKXBSqAe64T\nvO5jVCgjfy/aKQIZVuTENGPHj3/AXXfdy9LCEvPzC3jriYOAQb9PEQSU3rF+y/Hs3rWvKidLSWlK\nSucYadToD3pYX4AYuq15j0KydfMJLC91ybMKnD1wbXQYUNoaQoG0OZoaNi8IE4kKq7S4OTIOukYu\nLN6meJfgbYAzFoNFBAZhJFpWl5pwDmkdysmKI2BTpHC4wuF9QJRYbNuiVYT1EAYGKTyr1xSsWV8i\nVEpuUoJag16/oLewQrt7K42RkA3HriOUY8wfGWCyhPlly/SqCCFyalEdpQKkDpBJQBaCKT3ddhsp\nBZnxZN2fUcjem694FUI84WoryJ1BhkHFwnLVStJutxn0858KHqnAmMqMM0kSvJMYY4iiiLIsaTab\ntNvtKkWzBcbkBEGAtZWzlxaSUCpWr+uxaethjj2+x3Fbu2yY6aN2UhUC7ho+PghsAi6C3n8TfO78\nGZLeKHkhCCXkaoARgpAaVpQIb8ltgJISrwyZtwgZc/K73o8cq3P3Jz+GmN2JNBUveV5Dx5Yo5RmQ\nUYgQFQR86q59TD7DwrVw1Y9eyZ5HFfUwrrCtS0t0OznZoMRbz+jEOIU1pL0+gywjzwvm5w4xMTrJ\nodkOIIY9p2psNYoiEJ52L8UMXQy891zx0pdzwfkXIrxkeaVDu9+p5macxbgS7w2uNPiygkl4ByiJ\n1tVNzCiDNwb8I9RGHyROPJE2BEFEEGmM01ivSFNHbyDpppJunmJdgZCSXmrIBhE7d1rCYj06tDhC\nIjFg7cwKx26WFCan8BGprXFwto1dLigygWfAtjNmoBawcjhlZGKaB3fuIAg09VgwlgQ0mwG1RoSS\nMUWqWOl7lldyTKmw2tPpG3w2xg3XXPezt2fRsprmQyhyX5kX+dJUJ384Cz46MkKtVpU7s3RQ8cLw\njI6OEsch1tvKnsCVlJlBByGDIqfwllCHWO/RYYgtLc2kwVJ7iVAHWGHZs1ux59G1XH+NJYxDRCCQ\nwaOcui3nrBcLjnt3n3UzffRO4O3QuNJzwe91ue6/BNV4rB/aYtfGEGuOobu0SKPfQeAw0iC9RziJ\nkQXZcsp46Oh0lxm1Fqcs2laQvUIKjIfSB5QCIlOy8+4a553fhR9Cc2IfRb6FXruNcJ7SKOq1UQaD\nJVKT0T20j1o95nkXX8izL7iQ5SMrXH/9jTz84CP0UiokLpXuKVAaW5SEYcD0+Ci9QUY/TUForvn2\nd7j5h7fy6le+ko0bZ5ia3szywhLtdpdQKjLbwwuFiuKjujMAqTTS99DCUQqDZw15cRjPEXRT4ooA\nJxx4hXE51ni8U+RZgRv6zxjv8LlC4Aio4VWJR6IVNBLH1i2WKEnxeVZBDhFs2bqRB2/cTadt8Fqw\n0s5oZhoVCoJmwpaTzubh++7EFhKblWRFwIiBQDjKQURrJaOXVuPXSE8tqXHauS/ihmuu+9ev2f//\nw+JfPoSAWi2p3GZ9Nf0sZWVBgBTkWT6EF1R3vrHxUaSUQy5uTqvVwXtPkjSIknCYhlWcL6UkeVkM\n4XOVfL3b7xOEIcZaojAEJ5BUq9JgkNEdZJR2gn5rlNtvMYRaMjYecf45R3j1Zx6CZ8MZ97Z55FlN\njtyk8KFG25LcGLa/9fVkSvC9//4BJtopReCRTqKdAAcP/u1fYlXBeJGhbERXOQpvaQnJnDT0naUW\nRoQefCD58V0x5z27C7fA1JseZ2VpgiRJmFmzlsIpup0uI6OWpBzFWUe7dYTvfusGvvSVb5KlDmNB\nyQihFUEUV2ghOwTxAWVRYMqSRr1OGIT0sxJTGtI05XP/628YbTR43gtexOmnnc7UZJ10kLK47MnL\nEuMV3hrw8ii8W4gQrEUGEuFriHwjTpTkgy6BhsLmQw9RKF2A9Z4wivFe0Or2K4QSI7QWGxWjTUmS\nWp1mM2DzRk+tcQQpFKvqUwxKgcwdvXyZ488eJ+trhI5IGoLO4YzSF8zunaMxso64NobNc7KiRCqJ\n8wWRgEEvo93y9AoPOiQeVYyNn85A/Iza5L3jl18LDMkkQ2l5mqYYa8nKglqtRr/fRytxFLr2xH6m\ncrIV5EUBErI0I8tzSmNoNCvARRgllFmGc46iKJBaUZghtb+sKmhaVD9/cXERpGBsYpxYBZiyQAgI\nw4B6GPGr77yT06+dh/tg8Q9D/vE5GymMQ2ERss7S1BrOOO9sHv7m1cRliUQcRYwa7
wmEJ8VjpCR3\nnr60LEvBEek5YAcUShBYCKzAYTnlTMdX/mgeXgmdOxv83SffwK4dj/D4/gP0Cz0sk2uEDKryrxRo\n78m8QwnJEE6J9/Zo2T3r91HD6peUErzDA1GSgNSkaVndmBB4BCqQxHHECy6+hDPPOJM4qrHUarG8\n0iMrq+lNay0/oSsYvDR4W6BdhjH7UeECzi8gYg8iQ9gIKzSDwpLmIb1OQVpICp+T9mt0W9MoBUom\nSAlapZxy8hKrpg5TiyZQQUJRwKCAQWmGr9N0uoooHsGaSWyRs3SkRW/gCMYS9u95jEYYEYWaMBQo\n5ygGhqKoMbAeWWsQjoyz5eTL8UGdv3jPq3/20jApJWVZMBjkqDCsGo5SVoGgFGmaEkURtTAYlpJz\ntKgqMtiK2i68rwatwoAkDCitxSHo9/t0O20Cr4lrNcIoYpDnhHFEv9dDC0WRDoiimM7yCiMjo0Sx\nRlAR3IULkNLispK+9Vz1+TM44QPXEZ/nmXqo4Kx3L3P7H41jgLj0TM4f4eC3v85IWSCtpKgpcmso\nlEaXkCMqHI/09ITDSc2cKNhvcpyK0FbgBBRKYETJ/Ts1/U1QX4GR5R737PoGjzykccahwwAd1smd\nRXo3hFBWQamReOdQApy35G7IUxaKkdFxikGf4ijVsoLLlVmODD2NWFEaT1o6UmNIRELeGnD1N6/j\n6qu/xYte+ALOOvscNmwcp5emLK2skKZp1QQWCuEMqBJBgFAC7U4iy8bp92JCkyLDBYSsmoSFE9VY\ntNdYIOtPYdIxkoZEuQCtE6RyOJ9RqwXUaiGxhiC21JOIJA0Zc56+SSgjTYxgfqGgrPVIrcE2AwgN\nziuQMaVVkFXN7Ir8GRMENaLQUuhJNm+5mCRuYJ7kTvAvHU9bsAzyDIEiShqUeYqWlUGm9x4toVlP\nKIsCaytXqDgOSdMUKTVCSIw1mDInCKpSsfcCKRTGWkZqdbSQZM7QH/RIsz5FWTAy2iQYdveaIyP0\nWm20kKyeWkW3u4xSgjTtE+iYwnmMdZT5gMHeGv/01RN4zWcegVfDM+9d4r6bJlDt05h74CHGvUeU\nIIXHaEfPh7zgIx+nVxi+8f73osuM0ngGwtPVmt15RjvylFoivMXIJ4ATlUVcVliuvaHG5a8awN/D\nJZe02PnAGsIQvBA4Y6sigrFVzm8KtNYY6RGigheiFFqBesK+WwjGonG6nS5ZkSGQWFF1WmxRUChN\nGIZo5YmNIi0qiUqv30dKxde++S2u+urXuOiii7jkoos4bsM6Bv0+S8vL9HKD1VVzzyuPkxUkfLTe\nwFGn092Dtp4wSCl8ZcOd5oJBUafbGyWOxmgkIUr6yjvUC6S05ANDEDgivUQkJRqP0CmiDs5LolLS\nTx29xNMYh3ZhkURIbXGDlCioUYu3Muj1cAoin1XWeC4ho0mjMcHMmmPRYQPnqv3YUx1PW7CAq3oo\nxoLzqDhAejDeEKihz6DSZNngaFOysnk2R6mVWmmEsBSFQQqFFArhLVFQ7WGKLEdrxcT4RLXxdtX3\npmnG8uISjThh9fQ0rZUVhLQEUUwQVsNSxnms9wRhiBCCK68c56yPjnLiK9qod8OLP7SPDzyvYFR4\nRBigvEQZQS48RRBzYNdeHnxsF1luCJSjl0Q87kpmBylZEpDh8F4iPVUxQAiccENrc89Xvtzg8ncM\n4NVw2e0t/urj6ygLy7ACi7ce4Ssv+Qq2V51PpSAIgiFA/CfKh9I4olpA3KzhBlCkWYUFompIWmtJ\n06ocHwQBgQ5o9zOyNEOFlUIgDEN+cNP3ufGG6znz9DO49NJL2TAzgxWKg7OHGeQ5Ell1x5XHupzG\n6FpGRyfIixXa7cMszB9mudPCyZjxyS3Um3XCMEI6T5kboriGdFTq8kjjTadyr9YGIRyu1HhfIoMQ\nHSmMlSQxDPKSwFkGLoVS4SjJrCVsHEM01kA4SWELnPMELiJRqwhMn8RHFIs94g2r8P5ndJ5F6xBT\nVmViFWjyQTrE94AZiiifKGs658jz/GiXP88L4rgicXig3xsQhglZ1iMMA9I0HV4khrIsiEebCCUp\nh2Y1zhkmJidoREnlZ2gdznuy/gDpNcZ5cJY8z0mS6nuCsMaf/vnxfPzDdxGd49mwu+Sy9/a4/o8m\nKJ0hKiURssq1u4vc+ZmPUChBQxl2Ws+86bOkPIO6rtynXPXbO+ErhsA/s1y45eaI2T+WrA0dEzsc\n289e4dZbRnDGEGiNlGooganS1ypAqpuIUmr4u8c/scyWljTLUEoR1WvUopjW8gpQBUsgBErJ6oyW\nJU57RhoJWmm62YDM5lQB6YjDgId3PMD9D9zD6tVr+NVfeStrpyYJ45j5pUVWem2kDCm8RyqJNxFR\nXGcyWkNz/JTqpmUMpW9gsVhbIp1HNgTSC5yzBHgyLUlTiSDGe4t3Q+yvKXDG4L3GeYlWIUpapDco\nryhLiUaSW0tRhsA4QRgiw5AoCpgMIm79/jW0Fh4jUobpdRu4cP3bUeJnNFicrwR9zjocDj+Ukhtv\nwFd7mmy4QX+yzYAQCqQfVr+gLAvqjVECHYMQWFMepa9HUURRKJI4prQFSmo6aQ+lNGEQ8r/bO/dg\n27KqvP/GnHOttR/ncV/ndt++TdONQKCB5qUgKoIiLYLVAoqvpIpQmpSSEk2qjKKxYsVUtBBjUvGV\nSLQaIiooD01EBQVpVFqFbmho+qnd9O37vvc8995rrfkY+WOutc+5t7svtwj0MVVnVJ06++x9ztlz\n773GnGN84xvfqNsG24V0MSS0C1mlYzI75+agwqhYon54P+9895N4w2/cD98DN912jpvfV3L6PsuB\nIjI2BYKQdJlIw4l6wvkSkg7whSHisZqQZLABogVveFQdxJSE9753gTe+fgNuhle9boOPfniBonII\ngkgmbMaO32WMRTXhnJuPa+hZD95nWVRTOGKHEFprWDl8mPWNdeq6prL5dEI1Ey29ok4ZDUuKwSJb\nky2atu42MIuIpSwLzp47y3/5z2/lwKHDvPq1r+aKK6/k4MGDbM1qzm9uMmtajGmIYrDVAuMqK/23\nKZLciBgTbVMT6ibvfMFjxNJON6nsCD9ZJsVVAhM0KMa0JIkYJ6SYiMFCEIhNbgu3JVYskgRmIF6J\ntCQ85ahieTDmrls/Sjh/jEXbYN2YQ4efiaQcyl/Kds1ZfMq7uTih9S0hCEEsgkXV4+sG5xzW9kLh\ngrGWEBSViI8NYDHFmPWtLRaXwJUFdEXNGGMe7Fk4Jr5BCWx1mmRXX3UUUnbQJobcKy85/AKPWEsI\nMfPNyI81W1MYDnjP71/Fy//nCa6+YYr73/Csr5nxu/cts2YlD9MR8NRYHGYwyBuBJiSBaEEyoBJJ\nRb42jCrMBSZ2WFLe/a4Rb3znBjwLXvzWTfYfCswmuVVau8a4yhqKqiJ6T6u5MFqWZab/qOBDzLmE\n5H4bRUAsXgSvSrm0jJYVs60tnAgDY7Ep
ktTQhkAKAVcWHByPmTrLrG0xVghENAlGla04ZXr6QX71\nbb/E0uIS3/7a7+Kaa67lSVdcST1rODddo40GayqmWpMsWW0lCgYobEk1iqABbRo0zSjHlhgC6q7k\n3LnzLC80FEXEupQ5aSFBKkg+kUKDI+BMhciE0gwJoSL5JaId5bEUzlJS8MAdt3Py1CcwRrGSof+n\nP/9FJDfAmX+kIyfquqZtG5p6gjWOYTXAWZdzjW5Q5qzxiC2xRW6SClHxSUlqwBao5ppBWZbUszwQ\npwk+V7Tbhs3JOrVvqeuG1fVNVs+e59prryWlvAPHGHP12Yd8IqRMLtzJxoWOSiOCj4H9+x2Hl6Zw\nK/By+PhfD2k0MRMl2IJgC4x1qEBMoRsylBNpMYKRjOIBjxDI60273f2++wo+daaEr4TyA3DjKzbz\nEFRNeTyECG3K4VWgI0gCqevviTFui2ynPNbbx8wU9lEzUVEVnCNaRwCiMURr8SZPYkOV0Hp80zAo\nS5aXFqhc1jTOvLhMh1cfUB/ZWt/gbW/7H7zlLT/HZz5zB2LgqpXDHDl0AENg32CJIpbEGmKjCBZj\nLZCVeKJmhZekA4qqYrAwRrieyXSF6SxSN4a2LYhpwNQ31MHTxEgyWUvN4NAkTBsIuh/siIEbsm/x\nKk6f/HvuveuDDFpFosdTcejodZhqAZJFdrDYH812sShpKUvDi77263j2c2+gKkf891/9NTAlW9M8\nowOB2mdGcgRaHzC2JKUAVqiqAaFj4NZ1jQ8tGKEsK2atR8mJ/9Zsxtak5tDKCrPZjLZuiM6ByFyJ\nXlNiMBhkloAqReFIKQ9MEhHatgW1POs5Jyj/DHg2fHaj5HP32m4klWC6elEy20J35CI1mhQ1WXXF\n7QiRgAtOlZ0hp6ry7nePefbrW7gZXvmWTd71roM4sgK+oFgRoiZEMzXFdeBISgnryu0+fCCIgAoa\nFN8VIX1XvC0KS2kMhTHEkLXYNEWsglFIEvP91mb94CrD/d43GOOIMaHELPJPZG3tHDe//Tex1vGd\nr30tz3j2c7jywH6i5ovu1PQ8RhwxGVIUrGaH1o4866rFPEXBBQYsM9ucMq1hOI5Ug0ibJsQEvoXo\nLa2PhGCpw5CN9iDJHcS5KyjKMcNqgbWH7+XeT32YpdLigqKitMWI53zdNzKLCWsm3H/HPZe8Znft\nZBGB1s944QufyzOf9WSe/ozreM13vJIYalJIWXElgqrQNCH34cdE1EhE8T6ysbGJbzwihtb7XEOQ\n3NtSFQVFVTFrWjY2tzh44FA3yFWxzmWUqx85181p9N7PSZg9g7kfiRFiJITIy15+Hn4L+Gfwnt8f\n5ddCrtSnEDshN6E/oRRA8ofTC3P2DvFoBeGdjoII73vvkPZVwMfhGQdnXP8Miw8esYYkQhDFlI6Y\nj8Nuk5H5bh1CJCXNY+Nioqk9k+kMRFlYGDEejxkMhgwWFjBlQRDBVAXRQqJLjDRhjXQ3I75tiKFF\nRKmqIk/jspbQNETfnTIaaX1N3Ux53/t/n5/6yTfzsY9+hFhPObA45ElPvIblxQorpnv/BKMWawqc\nKzFSYGQIMkRdyXD5Kzi48iK2Nld4+GRi2pTMGksTCupQ0sYBM11ifbpEHY9iiutYWByzvDym9at8\n5u/+iEXTYtThRYnWsHjgKAcOX4exI86ePMZnb/vwJa/Z3WMdm0jpCj74gQ/w1Ke8kZgiX/3VL+Su\n2+7gU3f8A7NGwcQ5KbJpsrSRbybzbsiyLClMxZnzq0TJIijtZEJhHVYT58+3TH3L/sWDDFxBPa2J\nMVJVFdOmxbmcyFtrsV1X5vLycp5t2EOuMeQ6TkwsLU94znUb8CFIvw7vvzF3aiaNWUJVFWssGrIT\n9m4TJJMPjeQiYdpBYHyEw5htbXwBzq0mPvRXFa98bQO/BTd+8xnuv2cFiQFjLYUraOp8GjrnKKqC\nsqo6FX3DrG7Y3NxkcXGB/QcWMIsls1lLiDWqPtdFQsJtbWKGFWXpsGIohxUSI82kJmikiuBMrzVA\nnsYVA0lym69xJbZ31pRQciipChvTXKz84Af/hPe+9328+OtfzI0v/1YOVWP2jy3nzq8yaSPWlngi\nxhoiTS4FEGhTyNOPy2UOXLGIT6tMpudYX3+ItUnLWm1Ym7ZEc5RhtUJVLmHtMkEr6mnNJ//8f7Pf\nGbyf4W2LtUt8/Stew/jKa5mFkunWKp/42J8xml26U3IXKfpZSvOBvz/G1tqEfUdWKK3jhS98Abd9\n+r4u3haWlpYy7aXLMUzRwZvAbDqjlZBDAt9SlQUaoQme0LRImdg/HjMoS+q6ng8v7WN56AcARcBS\nFEO2tiYMqiEx1rmW0w2BrYqSb7pxFfNe4JvgLz9bceKEfcQF3/e3b6N3csFjclEy3z9+wX073icR\n4d3vHvDKNzTwRnj5h87xP35lhdTBqH2bgohQliUhBI4fP05VVQwGIxaXxgyGxQWt2aoG5wpihPFo\njOiMQyjTGNnYmlEOKsblEFdUiBQ00wkhZUSNECnoqploHgAVfBa+s0rhCpAhTdsw811/fNtijLK5\nuUFVDfibv76FW//64/yTpzyNV9/0Kq5c2k+xcpj16YyzaxNCTBhbMGtnVOJwriRpog5KG0YMxwdY\nKJ5ESE9g4eCQfa2wOWlptSRGIVGBVFuj+tsAAB1aSURBVBw/u8Htf/MBhjrDhzVMNUJHR/nuN/wA\nszBiWic01vztLe/nQOV5+Pj5S16zu5ezGIumgo31ht/+X7/DP3/j9yN+nabeZDSq8LFma7pBU2d0\np6oyUtGEWd69gMIWeV5g3VCOBgRfk0IgpsRGaNBQs7xvgdn6BGsryqIgaew+PEOMff+C0LYeI4oz\nBSkBXdy/c5LWjd+yCj8AvAne895RzkO40FlSSl0fjZnXirpKI7CdvEO++5H6xDtNEYE///OKc2+B\ng1M4/HDgK18w5fZPHsDaDK/3NSgRAWtY3r8/306JppnOQ8r8Gl2GSJMgWGazBmdLtmYbXPXU6zg8\nKDh+4mFiVApjGAxHjMcj2jBh/dQ5KixiBTFKaQxROwHytsHYHFKpMQyqXOOZtg2lLdEIYg0xzChd\nBg7uvuvT/MK9d3PVVSt8+2u+jf2LyzzlqqtZrxuOnz3HaDTM+Zgtc+drCmhlmMb8edh9R8EUDD0M\nFoXprCGqJanDJ8et7/k9ZHqMYVXiyxVe8LXfzHXXv5goI7AtVen52J/8MeH0P7A6PceguDR0fFlE\nShHZB7wNeAZ5W38DcC9f5OQvEdHXv+7bKIuKGFqu2L/IN9z4Uu6/6y4+c9d9nF2dklK+UCaTFu89\nZZnJk5iIc53c0WRKinmGYasBg2M2qTm3eg5bOo5ecZC6rhkN84UtbI/CtjaPzO4T+B6FQZWkStPW\nCDLfvcfjlve/+5NwFJqH4YavOsLmhsn1oosudCMGYzvaokh3cjFP6HdqB+wUwxORjHJ1pxOAplyT\
n+o8/s8b3PTSBVfijb1rgZ//DtYSOVNr/7yTgimJe4HQKTZNlVoui3DEuO5/O0WemAiIMrMlzKlHa\npqEYFAwHA5xYrHXE5JG2YfPMOcZVgdABFiYhajARjLW0waNSkkSwIrQhEEVp2kDEMBxUpBQQNYgY\nYoDSOVJsWFgY8u3f+3quu+4pVMNFJrOW02ubBGAaIoZEaS2JhogQokNMLkzaoiRGT0SIyRETOBU+\n8fGPcu7cOi/8xpcjMkR0TEqGaOG2W/+Yuz75ZyxGmPjzqBjuve/OxyRSXq6z3Az8har+hog4YAz8\nJHBWVd8iIj8G7FfVH+8mf70T+CqyMPiHgKdq3wDROcv33PSKDtJMjMqqq7orCcEU24gOavHe431O\n5I1TfFd4LIpcrfcxIkVB9InPP3iMlcMHSCmwf3kxQ7cdJSTFlDlV/akRsxpMDBGx+U1PKYdo3uew\nZTDIFPdnPKvml3/wDngD3PW7jm/4hsOPmqDPHWYneLAzad9hfU7T0052njrzMA6HauKGZ9f88S+f\nga+G2d8LN73iecSUcOW2swRNXTIfuw7I7Xkt/SahqrlOo8qgrIgoaoRC8npPnjhBVQ2gEKqyZFR0\ntQeBQpVzp05SNzWLi4sMDEgMgMm1IvqaUb7dv/dqClQstc+MCmPyBIAUt8EVQfHRUw1GFK7kO7/r\nu3jm9TdQVYts1TUb05rNukWNxadIVMUgiC3wSbFFAVpmhVBXUAcl+YSzBTE1BCp8zKIZXhP3fupv\nuPVj72NBtvCTgC3yBvXZO2//4lnHIrIMvFhVX999iAFYF5GbgJd0v3Yz8BHyqLz55C/gARHpJ399\nfOf/7UMU7wPBjWh8Rys3CRM8mrSrhwjGCoNu0Ezj63mI03dLMmuZtZGHHzpGUVgWFsYkPNPZFN9m\naLQsS8rhIDeFxYSKYk0OXcQqSXPCXDezLLgtuRe9bTNJ8aqjNdwPfAU88IDtLsBA7ueQORW+f21F\nUcwdpb//4jBLOqdIO/KbudxqviM3J6GcPl3APqABY6EaFoAjxDRP7qP3+TV1/z92DOOdTuu9pygK\nnMknZuy6Tl13ch9aWWE2ndHEmqbW3FnqHE6VshzgEwTjOLMxodLEodECZiBzKFxToqCbnGxyv1Ci\nxdiKQeEYFJa69ZCUjWYKyeC6acvWOnybQ+N3vvMdCMJrbnotz//KF3DNoSVqtZze3GJjs82nkWpX\ntbcYcSQy3E8QrCmQ0ZDgPdgKrT1lOaL1DQ/d+yk+/qHfZeBaorZZpZ/B/DR/LLucnOU64IyI/Cbw\nbHLD7Y9w6clfOx3jUSd/hRDmdJbWh6yCbwRni6wk2bYdDywrUXrf7WBdV2VhLSKamcJ1zbETqxSu\n4onXHsH7Gh9qxqMhoatmz2YztiYTmEwyIlaWjKoB3nc9MmVJ0zQU3ammmhAx8zBm5fBWdpYnwYMP\num6Hdo+g48D22PLt0yEnxP1J0g9JfSybJ/2AkAulz7rBZ63m58L9949JQUkEjCsQmIt2xJgnERdF\nQYB5ztWfKHNZKRFiG0Gg7Iipffg3GJbMNhuqgcthafAUA8uBA/t44MHPI1piNeCJPDTZZFzDwqgi\nBp95a6rYrj2gcHmAUN1OUOOQlJnQ0RpG4zH1tMV37RliLaFtaf2MwbDEOcMffegPec8fvIeXvvgl\nvPRl38Sh0QJH9h3m9Ll11gMkn2tC4HAyQJwQEhgpkOBJKbdzlKUyjVusnTrB5/7i/ex3NTHm2ZIm\nF8J4dOLRtl1OncUBzwN+RVWfB0zohq321o2UuFQ894jHnHPzOD2HDZJ5X0lJbcbqS2PzaIEUcM4A\nARcTJgRSO0NVefj4aU6cOM6oUq45uoKV3JBUDUYY4zBkSnthHeNqRGkKKltChOOnT7G2ucGknjGt\n645eY7uL3ZPUg0SQeMHJ8uCD5gJnUMi7XPfVN0a18zkmmcZjsKC2a87agZaZ/KVy4Rsl9PXOxDOv\nn84laO+9Z4EQtSswxiwO0W5z4nqH6EmVfcgqeaEYBCuGwmVnUB9JMWtrTScNaEFByWxj0s1fbPnO\n17yEN/3Q92XOWWwIGglJ0FgwC8KZ1SnTNuHF0lqIkjchHyJGhKooMSmLFYYUkKQUqhzaN2bfYsW+\nhZLKJcRFisqxubVFaDxhMqEqhFtu+Qg/9VM/wTtu/nXWzj7MvhE89ehhrjm8wrhapCjHYDJ44YyD\nmDBqqZziLGhaol1d546//APC1rlcvDWBylmGZYUxgm+bL+gIX8iOAcdU9W+7n38PeDNw8v9l8tft\nn7sbaywxJQ4uL3P44IGMUinY7kP33mPKrJo/nU7zUV04rBEms8TqyXOcPTvh0OGD7D8wzgTM1Gn7\npkTtPUal44k5VMwcSXPOcWD//vlFFWJgfX3WtSoPc9yOzC/8o0/w2VleB5//mLugTvJolJXeLn5c\n5pXKi+BimScr3YnSJewoSOKGZ8cMsbwa7r93gZR0zjjuw7Dc72PmrIQ+N1HtmAjd0/WOngCNmTzZ\nh5IikomXZcFkEjl3/jxf/VU38PCp87zjzT+JxszT05RybiKAOsQJtQ+EyYQBBaMFg3Uj1FQknxVU\nhkVJ6z2zusa4TPuZTTNdCYSqcAyqDBNbHWA00cxmSIhYV1EUBZ/5zKe549N3cuTIEX7wX/wQw6Ul\nrllZZGMaWN2yFIVl1ibUGMAgdkgbA9ONk3z8lj/l2P2fY2wz26Gyjslsi42tjb56dEm7nPksJ0Xk\nIRF5qqreQ57J8tnu6/V8kZO/nvSEq1ABK4aqGMzh3KZjAo9Go/kHJ12xq209i+MBoTVsbM5YXZuw\ncvggy/tGqDZo7HpRBDCWwhQ5edaYk3mX4/e+HlFgCKoYyegaQzentkynq4iY+YyXq442O3IWM4eV\nL77o+wv+wh+zYER/ABu7XYe52OHyd+3CAjMPjW64oc2qMz8Dn/uTwfz3+5pRURRI3HbgPm/qf3bO\n5VO7L7b6bd3jbZ3oHK5FTUQD433LTNbW+dhf3c4nby9InX4b9HmYyYiigKjNdJeQOzWfds0KR1cO\n8JG/u5PkqkwMjh5rEuNBSRPyZ+FEMr+s3yQ1c9KGwxEhRUJMtDExm04xhcNag4hy5swpfvZnf5ql\n5WVe/33/kv0HV9h3YIXN4DkXG7ZaIeoA1OBnU+7+9C0cu/tWBibgkqCiiCpLg4qFwQraUZROnj7F\nY9nl1ll+CPgtESnJl8wbyAHeFz35iyLHskkjNsb5BTwc5A7J6XTSCVq0tF2SXlUVPgU2t1rOnF3n\n0OErOHCgoqkD1gy6FtrIYFAQU6Sd1iyMxnjvcxEO33VVapfkCi7mMCGpElKcV/gXFhaJMWUuWbvB\nyrKHkxCPwkMPbUevFxcZH3mfohfBw9rBzX0+8cjCZZ/d5P9x6GDiSJngHNTXCMceGpNrMNt/F0K4\nYMxb/1gfivleG6BbQ9u2XVKd86ukfa9PIqoSrEJUTDEg
hJaNmcekiNEI2gMa21mYKiiGRMIMCg4v\nCy977gqv/95/zc0fuIOP/vWthGRoplMEGFSOEMmDl1JG74wt0dYTJTtS4Ry2KCkUTBHZ3NxATKJw\nY6AgCZw+d55feOsvsLx/kX/+nd/N0a94KkuHlzm9VrMxbVnbmvHg3Xfyib/8EAtmhpFIIUOMc7Rt\nDQi+mRE151uXsssdwPopMhR8sX3Rk782zq9ijGW0MCZKJLQto8EQSYFyXGWRhXrKwFrq2TRDitWQ\nY8dPsLq2yhVXrDBeKEgh0MsshKh53LXvk1XL5tYUVaX1MzAtkMmTIoKKm6NDueejAGdpfItv8/iB\nKIl/cv0Q83ngajh1zhF8hZLlenpEvK9hqCqoxXQXImq4MEoTNGV0ykgPDMh25b/LWzRrrJIK5dk3\n1Bck99Hn/5+fS0kxOwVGMXbbAYLPa/FtwvvQIXT96WMwkvMr6fKIHnIGUC/dWpU2JJrkGTvbjRG0\n9H6iPdtZI84AanA2MSzHaBRCvc7Tn3qUf/b9P89td53kbb/0a7QbM1of0BgZWpglRW2B94rtWRaq\npJjQkIGLpcIx2rdEHROaLG3r8TEjlYOBsLF2nv/2P3+N4WDMa17zOq699kkU05bzp9b4+B+/i3Hc\nwghYYwmphrbXgfBU5QAfzQWijY9mu6iif4i2bZnNZsSiILQeURgPhyQRxBqKwZDCCsPFZc6vbbJ6\n5hzrG5scPHiQajDsCJQBYyx1E+YJei9+0V+AGWo2hK6GIl1NQVWoqmHu/qtGtE2T5ws6R9Pmbsty\nMOTIkXPzEOz48cEc6oXtnKRHogCs2EelvVx8guy8fw49m+2fVUFj4Otf7OEvgRfAPXeN56dCHwaa\nDqItygLfBoIkrHVAvACI2AYcMpG1CX6OkqUEIaT570uHDGVxQksdhGntGZYuo09KV0tR0LxdRQyW\nSGWy8o0bjDl2Zp1pG4izDb7q+dfz0j/8Pe6549P8zJt/msmZNZCCqrBEn3AiRLP9nvavr1cb9W1L\nWVWghsIOsti491jJc0NnTUsInre//TdRVcpyyPmZpzIRxWcqTMohfVmWFxSHvxBsDLvIOo4+MKwG\nrBxaYTQazWPrWdMwbVrOb2wyaz0+JNY2JkzrhlOnT3P10Ws4sP8KhIIYoNVEEge2IF4U7G1PvUpI\n31gW84DVGBTfRupZm4XffMAqxDagIVLZgoErMao84WiYO8vDD1edlhnAjkr7DsfcWfi7mPvVh1+P\n9nOWKNrOYURAVHnVq5oMq7wabv2rffO6SY90QQcVhziXQQohzp2qH9vRc+FiiLQx4FMiKrQxdoAB\nncPLPCfrf7+0FcZYmhhpUiQZCKpEQ2aI9whcEpJYioUV1lngoU2hKkfEGjQEkgpPvPZJ/Pu3vIWn\nPf95bCl455ChzcNWd2wozjqMGAaDwfbJHQKWROWyOw/KAieGYVWyMB4iBpp2llvKm5olpyy4fLqr\n6rw2tzNU7je5SwE1sKuCFXnHmEwn2MKxvJwnWakKDx87zuLictYSroTzq2t4H3jyk5+G9y2Nz/36\ns7oGa+aQrUaP7V5v/+YCJMlNYrYosR0kJKoUVZX7vUsHMSLiCFEpymIOwc5mM648YjKccQTOn6vy\nJGHJRMzsiD25c9s5uejEUHJ9SJD5Bd4XVy9AzHY4l4jwvOdFjqx5WIPNZ1j+7kfHGJNbn1NSMIJv\nslMYZ0kh0M9h6eHr/sTy0RI1oBGME0Lq5tqoEoNkxTARQtgesJpZBUpslUjO7QZVyXQ6YzAY4JPH\nOTNH1wQlqOW+tZb9jeOqJ34F99x9Pz//X3+dex48QSwXaDYmpA7kaFUxMVC43HJgQsQaujbp3NOU\nUofAxdyflC/8iGjsWqxzU10TPM44lhdGhC401eDZ8go2t10XRUWMuSDd04Kyrpx0n+Nj2645S1VV\nmQCIMJvOcC5L8RRFwROuuZq2CdS15+SZNWIMXLFymFnTIiZXt6dNjSt6UqAiJBIQFLTfGaW7CJVc\n9jZ5ZvyotPNcI6ZAqJtMsoxZGMNFx+J4geADVgyr5wwcBu6Dg89MjKqKOijbYxu2k1wA42y3NiXE\nuE227HYw2Ea9EpplYFNXNNxBfbHWctO3tXka13fAx27ZR4iKSOjyBANiSV2ik0UcEm1XBa+7sMq3\nHkSIus04MEkzXcbl57NJ8pCmTknemm2OWgwJcQarubfd+4RxJXUb8DHiyGwMQyKEyGC4j1vvfIAP\n/tXfYK2hKPfnOTtYKmugGGTWoGpmaAhEFcRarDOEtu6cI3V0fwOdnJPpNqiMxXVF6u4zGFqb1Vus\nwWuCQpGiwFqytFXY3kBtBgCJMNfC5tIHy+62FWtXwCuKYj7Fq/EBNY4oho3JDFcWrFxxBVJ0feeS\nCXjOleQNfLsWAszrDH0OkXdo5m+Sc9lRQohzEMAaISVIyVAWI1A7p/QXRcH5s4NcOToGh1Zalg8c\nIGRq8vz06EMj2IaDd0LL2Wnz46ZTsN/JaO5zqJ3UGJHEt75ymp3ldfBnH1zGJyGooVUl9DWilKHg\nWdMybQMzH5m2nqYNtD6ixpIwORTtcqGUsop88BB8roSrV3xUokqH4BVoEqxx8zU65+Yb24te9CKs\nGAajRYpyQDEaUY0X+PzZNU5vBUK5Hy0PkMgtzVGFumnxKlA4cI6AEJLJp5ZY6gjleAlTjmiNwSIU\nxmK044J1MH+PMPbvVexQTWPIoiWatZ1NShQOhqVjaTRmUBQ46UIvI3kz60PgL+Asu9hWnF9gWZaI\nNfP7DBYfE+fXNlhc3g8CtnA5yBEh+oQ1FjR1iWneCXcWCfsaQl+c64/Xpm1xRlDyRZbZzA4lZQkj\nW9DDtdYaptMpzjlOnrTwKuAYPOVpm7zph49z9vxG97/BFUpRKEWRr4GihEG1gbUpP+bIjxWKc/n3\nnFPKsr/N9t/v+B0LGYBfg43rDZ+4bTmPvFOIMRDaQJ+oqSptyv0l/ankdyJxCtJBvmhmHDhXdfCv\nsm9hH2sb61hbETRiElhhXnjcuSn0G9Htt9/eMTG6kwGLK7Y3JpJBcLQ6xRUFrW8xMfcxKW2HfDla\n73HOZnIlwsxHTEqkttNlbj1FWTKPseN2rti/dhFBU5tFNMSCyc/SO4Ald9kKubzQHxOFc/gQujrY\nP9Ie/L5QFkKgKEqM5AE6SeH4qVPUdcv+5YN4jbkG0nOX6D6clGe9J01dXSKRhXcNpigy5h9jzj4l\nEZOncBYrgpAp9M45zq6usbQwwjmlaaa5eckIdRsR44gYzp4dZh7C3bDyVs/3cmLHCwE80Hbf/WP8\nfBm/85EJvJQd9ylQAD8Jt9xygM0Nj4hDNXbJuOAbjy0sGhMR0/XG5PeHntHbAx2a0SsxueWZHkRI\n8PD5M9kRrMWosDWbMR6NcquEMXPSaS5aKhhLPZthncufheRWBFWHdJQaQfAaEakITYKipG5bjEs5\nJHIVrjQU5SA
r7UjCuIpkFJ8aRjODWMf6pOZKV6DJo6V24EZB1mdMFK4ETXRd5R0JWrO2s8niHM4q\nkiIxBgqx+JTAGETzfEqsEOI/4gS/txTzzEER4cTxUxw4eIjFxUVmsxmFyRrGOexKoDKnrIBgbImI\nouoRk+saqjnxFTWIM5RlQdNmiLPr60LFEtRyfmOT8cIiMSSMcXgMkiB0GL5FOHW24sG64ok/11w4\nxbe3Aii778UX//NH/jO89N9t/5wEQhAefKDi13/kMELeBSEnrzEAtqBtI9LR3nvqpunyth5g6MOM\nC0LFuM0AsF21PxccYVbXjEbj+d9sI3Qyh86dy3Uqay6sT0hH2e+dlZRDz0nbMh4v5EShFKrBgDAQ\nkiiuKBgMC6646mqe/7zn8pXPvYH/9H3/Cm09q9MZR5dGJBVsBLFZHw4pUAETIyqK2gpBcJmFR5Z8\nyuFWJlTmkxIUMZlRUEgupKbGU7ryktfprjlLn08oGSNHI2fOnKWqhrlHfpqLidInmOTxBr3o3Zx2\nnugapvIbFGNu5iJ1iVsI1G1NWQ5oGt8hUlkuCMlUcrF54Gt/32SyhbWGpBEJOUx50w89jZe+dJVq\nFBGxND6wtbWVq91b0Lbgg+A9oCVNo3ifL/a2Be/zYzEKTZsIweTHfd7RvDecPLHJze/bhyYhzBJX\nHjmCSXF+qqbkiWq794AuhOqmDyclae7zKJwjyXZo2jtNbzuha6R7/2JHag1Zl2vn3+WYfvtkkc4p\ne+KmdpoCvWO23almurDNdmFfuTAiVRWihmpQsrRvH9dcfw0vu/FlXHXVUSZ4ZlEJyTNYWs69LzHO\nUavSWEzKIym8JqwoYoWqKpnVM6JRTDRYU+LEEk2LJogph9kxbm8eOeQKiOkkkDSHfpeyXT1ZYoxY\nZwHDyZNnePKTn8za+mRePEsp4aTqwo7tZG4+ycs5vI+5KhvzZNt5sgY03ncbnFD7lhCBmKkuidyk\nhFiMG1C3uce8qRuG42WaZkpICWsz0rK2tsB73pPVXHJMbrnjzs8hO6q+/QVYFgMeAQmzgwdmHjn8\nVESoa4OdFcTgWRqPM5LV5V99v0hMSkydOkxeDYpiXYZatRs5aCRLsu5sQrugQp/3ky60zaPyeoqN\n7gAZ+vezL/b2f0vXReqco575rihq5iePAGJtDuESWa41eF77T7+bu+65k9d8z7fzhKuPMJnNOLR8\ngMFwiJ+sU4eAsY6zZ04xKkqElBN1Y0gmNyxgyBucxu4acewbDzjohFqFII5pSMRkCZIIRjNlJpnt\n/Ebj/DWpJtQqbawveb3u2nyWx/1J92zPLtP0/6WteM/2bM92sc6yZ3v2/5vtOcue7dll2uPuLCLy\nChG5S0TulawK8+V+vt8QkVMicseO+w6IyAdF5B4R+VPJUk/9Y2/u1naXiNz4JVzHE0TkwyLyWRH5\njIi8aRfXMhCRW0XkdhG5U0R+drfWsuP/WxG5TUT+cLfX8pjWIzSPxxe5KH0fcC25mnA78PQv83O+\nGHgucMeO+94C/Nvu9o8BP9fdvr5bU9Gt8T7AfInWcSXwnO72AnA38PTdWEv3/0fdd0cWGPm63VpL\n9xz/hqwi/Qe79Rl9oa/H+2R5AXCfqj6gWSrpd8jSSV82U9VbgNWL7r6JLN9E9/3V3e25jJOqPkD+\nIF7wJVrHSVW9vbu9BXyO3Hb9uK+lW8O0u1mSN7HV3VqLiFwNvJKsMtAjUbuylkvZ4+0sR4GHdvz8\nqDJJj4NdSsbp2I7f+7KsT0SuJZ92t+7WWkTEiMjt3XN+WFU/u1trAX4R+FG4QDViVz+jR7PH21n+\n0eHUms/2S63rS7pmEVkAfh/4YVW9gDzzeK5FVZOqPofMevt6EfmG3ViLiHwrcFpVb+MxeL+P92f0\nWPZ4O8vFMkkd8f1xt1MiciWAfBEyTl+siUhBdpR3qGqvhrMra+lNVdeB/wM8f5fW8jXATSLyD8Bv\nA98oIu/YpbVc2h6PxGhHEufIDbrXkmPlL3uC3z3vtTwywf+x7vaP88jksQSu69YqX6I1CPB24Bcv\nun831nII2NfdHgIfBV62G2u5aF0vAf5wt96XL7i+x+NJLnpDvoWMBN0HvPlxeL7fBo6TCfEPkWWc\nDpAFy+8B/rS/cLrf/4lubXcB3/wlXMfXkWPy24Hbuq9X7NJankXWi7kd+DTwo939j/taLlrXS9hG\nw3Z1LY/2tUd32bM9u0zbq+Dv2Z5dpu05y57t2WXanrPs2Z5dpu05y57t2WXanrPs2Z5dpu05y57t\n2WXanrPs2Z5dpu05y57t2WXa/wVjO2xV5Ti03wAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# randomly sample one ref\n",
+ "ref_ids = refer.getRefIds()\n",
+ "ref_id = ref_ids[np.random.randint(0, len(ref_ids))]\n",
+ "ref = refer.Refs[ref_id]\n",
+ "print 'ref_id [%s] (ann_id [%s])' % (ref_id, refer.refToAnn[ref_id]['id'])\n",
+ "# show the segmentation of the referred object\n",
+ "plt.figure()\n",
+ "refer.showRef(ref, seg_box='seg')\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "1. woman in front\n",
+ "2. lady smiling\n",
+ "3. woman\n"
+ ]
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAMsAAAEACAYAAAAdo4LwAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzsvXmwbfl13/X5DXs6853vm3t46m53a+62rUiOsJzBdrCd\nIgG7nOAUwYGKIYFQScoCEhNMEopUTKVIhYKCGAwBh5AASYwdWx5ky5Yta2hN3erhdfeb3313OPMe\nfxN/7HPvu+9JaquC5XZSb1Xduufs4bf3OXt9f2ut71rrd0QIgYfyUB7Kby3yrb6Bh/JQ/kWRh2B5\nKA/la5SHYHkoD+VrlIdgeSgP5WuUh2B5KA/la5SHYHkoD+VrlK8LWIQQ3yGEeEkI8aoQ4oe/Htd4\nKA/ld1rEb3eeRQihgJeB3w/cAj4JfH8I4Uu/rRd6KA/ld1i+Hpblm4ArIYSrIQQD/H3gD38drvNQ\nHsrvqHw9wHIOuHHq/c3VtofyUP6Flq8HWB7WzzyUfylFfx3GvAVcOPX+Aq11OREhxENAPZTftRJC\nEF9p+9cDLJ8C3iaEeAS4DXwf8P0PHvSP/psfZTQYEIeAyNbY7ics84JxHPHs5UcJk2uMD+8greCo\nDKxvn2UyPkQT2D13lsPxEZeeejvPv3yVp77hcfTwLCqLSImpqgmmmONDRGfYJ4RACIp/9k//Lz74\n/veTximdXo+yqfnrf/O/5S//R/8ulW2QUUQ+XeIJHN7ZI4s8wXr29+cMBiNkFDFYH5H0+vzVv/V3\n+OVf/CUMEiE1CLW6TjsPCCFOXp/eBhCkuG/78f/J0RGj9XWEECfHSi8Aj/QWXzc8lsUM4oBVEddL\nw0zHeN0htg6EAykIArwAKe85Dt4HpJAELEIEQhAo1Mn13Uo9pJQQAod7d9nY3Wnv+9S9txI47UC0\nnM5X/sz3jhEn/4+PO/3/tJzsQwCSg9u32DxzFghEkQLnUQiED/d9V1avvjOlwRuUipE+EJYz3j8c\n4jsdPjWfE9IeOorwp64tpEdK+NjPfORBVT2R33awhBCsEOLPAD8LKODvfiUm7Lv/zT/O1RdfYntr\niFcJPi/Z6W+hD/f50ivX6EvHbN7FLO6Se8FBOCQsSzLtyK/dZnN7m9oGpIRuNiDTGWVT4FVNHATd\nJMP4QKhqnLOkcYd+0kMESbksCD7gvATrsHmNL0uMM2SZJk4HJBsjimrB1sUdtrYcPlIoKYiERCv4\n8A9+H3/hT/8gf/6v/DVe+tLroCXBB3wAgm8fmmj/rG8fqA+eACcPSUhBcL5VRsQppWwVJQiwUhAH\n0KZhu5OxKaEWgmuFI1cxImgEFqc8CggEWvW+p3TGGJTUq833FNSfup4SAkJ7NqcUezXQ6eeLlOJk\nHLin9KeBcho4QhyDtr0G92FDQBCr+4bgA0Le+0b8AxON9x4lBSGsYogQQEoEoG0gSIEOHisEBI8L\nkm4Ma1rzxTzH6g4xEhscWkYIsfrOpMR596a6/fWwLIQQfgb4mTc7prh9k/3XrzCwF7Am4IJnTVo2\nMZx/5nGqWcFwrUSN3s5GbwOxu83RZ1/i5tXPUhvHbJlT7t/G2ppf/83fYJbXhCpHxpIgYoZZhBOB\nJIpZ664xXNtgbXOTwcY20loMAkwFBKraU0xz+h1PfbRk9I7HmB9dI1WeYjzHC41CsJgvqPKc9dEa\n9XhK6e7yoW9+mr/91/4SP/Tn/xNeu76HVymx8Fhr8c61iqclIQgIstUNAghBAJQUSL9SMDg1owJB\nIK3FNksudhPOqsBCSm5UnrlWhEgiCEgREFJiT41xrI8hBJRqYYQICCQEjRABIfzJMQJ1ArPj8090\n+gFrcLznQYtw73Bx3z6BumdphCAEf29GX1m3e9cK7bRxgvlw/5jB4YVACIikxHuP9Y4gQSKQtN+D\nwyOBgGArjhFKMC4sxD2c9HghcPjVdQRBqNUk8NXl6wKWr0UaI9jYvUjS76HriiztUE0WbG6vUSwm\ndKTGawjGM755i/W1PtYUdKRF0WCnS2iWXN45z97BPmfWhwwHW9RBsvv0c3z+C5/nXe9+ClkbKmep\nZgtkU/CJX/kI86MJQmuiWGPrnJu3rxEjqJ1gMp4weMoxWSy4fPkJ8rrGm4BC0xtpsqyLqWvSXhdC\nYH17i8xW/MTf+lFuHi75U3/2L7KsJVrHWB3jvIBQ31M2QK2UIYRw72GFQJZ1WtUIFrWabSPrONfP\nWBMWEyqu15Iy6uKFQAnwBORKscTKMpy4cCs3zPtWOY/f33OR7ldqKeWJhch63S9zH0+7UMfvv1Ke\n7vTxp/crqXDBtQBejdXem7zvXO/9yb0oKci6fYRvLZI8NaZ1HqkUQYQTl40gCQSEjKl0oDsvOTfq\nsxCanICOJEJKPL51TVULqSDA/xY6+5aB5UtvvMbd63dYf99zWKUxjWHQG1AsGpq6oZNqJosFfaUI\nVc6Vz/4miY1I45TRYI1gDFop8umUOAQi4RAukAqJXc6opreJykeQWUZ9MGaQSh4/twXW4C7uUjsL\nIuJD3/Nv0Cz2mVx5GWcN73j67RByHr34OEJFVM2SWVVRH+ZceeNVdntr7M8nfON7nuHaa1e5fOEC\ncdZlPJny+IVzfPT/+Ul+/tee54f/0l9GpX2iJMVYhbX23uzoPF6ACCBOzdJp1sUL1876xoBxXMp6\nrAmDkylXcss8kniZooVrwSJaMBACUqkvU+ZjoJxW9HsKfDruuF/Bu/3+fduP5avFYcfunpTyyyzL\nsUVyfuWenorZhJD33bP391RWeMB61vpDgm/9LodH0I4lhMQFh1CreNEFbLAIoRDOAoJOYxhGKS8t\nc2odEQuLRK+sbfsMghB4AuLNDctbB5Z3vuvtJO9+F8nOFuiYICRhvuTaq2/w2ouvs7OxQVksiQ7H\nSB0TRgP27hxybruLkjFp2qeTdTB1QV/EzMqc2ig2L17ECkUWQ7Gc00ET6Q5BWZI0IvhAvSxIg6eW\nAVtMmF2/Cc0MoWJuHl6lV+9jCk2sNJGGkfOYQUamOzzxzFM8JSu0jnn/+57DaonUGbujEdPxEcti\nhvYL/sZf/xE++qu/yT/9Zz9P0h2gIoU1DUq1D0f5cBLDIAJCqnam8+3MG2zDuV5CVxlCiLm6rDjI\nUqTwaGlpHTDPsW49OOsDWNugVHSPWAiBEO7N2qeV47Tyn7Yw7XwrTxT+RKmPXb7jv9V5x/+PQboa\nfTU2qxjhlHUL9/afDvbvxUYBjycQkEq3AFpNMu7kM69iKCnwQSKCb10yD5uRRiYZ1yZTRJYSVu6v\n8KENKJFIEXAEIvHmmZS3DCxKGExTkr96SNrtURuPTjtcfPwRojhilEZI5elYw95syvnLT1C8rURp\n186iSYo1DZ/75V/h4tlHuDY9YHl0SPXZz3HhbY8xf+MqP/9Tv0KcrfNd3/e9TI9uY0xDVRQMki7D\nTsrmZkaIFC+8eo0zHcfGKKXOS7pxH
5tZ8nqBbQTOebZ3ziN6KeNixqa2+Njj3JJ8WZF0hxzM53Ti\nBBEsh+Pb/IHv+G7e+43v5j/9kQ/z0Y/+Kv/5j/5VUBqBwIqAjxQhSMTKhQohgHRgNaquOZtmnAsN\nQmdcWVbMUk3sKohThPArl0EiBPcp5pdbAsfpiPpYCZ2zD1gAt1L2Y0U+DaZwXxB++jrHoFIrq3a8\n7T7LgsP7+8990CV8ECjHhIhQ9wgB7z0BgVzFLMdWSK6YXk9AyJYkCFIirOHMaJ1xbTFKo4JqGcDj\nuMsfjytQCjh1j19J3jKwxFFGXizQUUzwFmkN11++zlPPPsf61giWS5pqSRwCnViwPLqFNobJnbt0\n1oakvR7G1Eiv2Vrr8shjv5dOt8dsfMi1Gze4vnSMZ5a9l17hz/3wO5CXn8TJBL9K8Xz2s88z7Ebs\nXb1JnA3IE8vdw7vQGK5PZlhTo3RGvqx45pl389rNG4S64NFzl6gXR3hbYawhyzIQgs3BCCdaJVvM\nxiQakBaWB3zre57gvX/vx/lf/+E/4Sf/4T8iTrpUIbQWRkowlki1imuait1Ys6sUnVhzJa+4qxVK\nx3QaQSllqyyyJQ2890RRhHPuy4By4mqsLMKxgjrnTvZ9Nar7/lgk0KrK/Qr/4LkPul6nXz8Y9xwD\n/PQYXx4DtWTAaTB6/+Y0NLSWOzjHsDZ044RPj/chSlbHr/afEA4tuKSFr5xduSdvGVgO5wuSOGE2\nmZD2MkaDDd7W2aFc1uzdvs5Or0c3SbDGoyKJMQ1NUzPa3qZYLHE0VHnBoNsnBEcWaW7f2OO1l7/A\n5OAIQgoyojQG5wwOh0odSktKa3jync+gJ1fpRYG1Z56CYJGhwmjL2tknuPXC5xmmCahAXdecvfRe\nnPscn/z0Fyiqgjo/oPENiUxRUcKZ7R2cdWyeP0cajUh0j8V4zMbWBo2zbMXw/d/z+/jub3sfP/Jf\n/m3uHB1Q4dEiIIOlq2Ctn2KkYcN6srTDS2XBzUiShgyBpE5aqyBO3PpVLODcKdfqFAu1cjl8CMj7\n3Jt77sbp16cV9ctm/lPHfKUA/ivFMMf/nXMnr78agxZCaGlhpe67jwcp6NZS3r/vvs8gQLjW3dxN\nU2ItORQeIdXJcQ+CWwjRUvy/RfHJWwaWNO7yiU9+mue++X0sqxozMyyWc+gq6iC5Op6QaQUIjLX4\n4FkeTekljl5nncPZId31EaSC/vYZXrt6m8nRgspEjBcNprEs8ilxoinKJVlXIW3UmnGtSbOE6tYc\n7y1GO0Y0RFmHSbnALg/odhOWiwmYKd1YEK7NuNAXlLEgynbIom16lx7jJ/+3n+N7v/sPEvd7WBEh\nVZcP2sCnP/lJHnnkIq+98hK3JvuoIEijmE6S8eE/+yf5/Ks3+Omf+qdc2Fljs6uIfMXeeMHycMkw\ny3ijLrgRoKs0QXiCCsggUCIgFLBSmnDMsnqHkR0CAeUtAgMhWrlS4oSFFUIiaRXDr2KCAEjUKddG\nIKXi2E9pCQRxj0IWAedWwbwUBH86R/RAkP4gsbCyKkq1VDXyFH0txYrJunfusXjvcTRIDc6vPlPg\nAWBppHdUsaZb1pwdZhyWNTaO6QSBl2BDgOAR8n7r9uBYX0neMrDY5QGPnt0klEvObO4SeQW7miYE\nyt4CLUE5i1cOHwJFWVFUJReffZpuZx2RxmgBu2XFlc+9wAvPv8wyzynKAhUEd27fINbtTFQUOVHc\nIbiSaNAjUxFBwSJoMp3grWBSLRjEPYxJCaFLM/8CSVkw6scsF0csa02UDsjQmNmUkpq9wyO6Q4md\nj3HLOSaOmU9Kyukhb3/iUSItWU/WufiOp3j9iy+gpUIS6KaaP/Kd38qf+Ne+nb/6V/4z0kSwmE35\nhndepLz0Nj7zic+QjYasLyoKdEuXqjZH44M4sRhBtEk7GQJe+DZ/EVYgCIpVYv1e7mSlrDZ4pJDt\n+1VyMZxS0GNb8uVxRHuMd/czbM579MoinLYexwA5vc17j9b6AQLgfnnQWpymwqUE674869/esQOh\n0MEzJNBLe3zyxk3seo9wusrgK14n3Jd8/UryFlqWwJ0br/PopSdQFkIqaQSkMkKGHqJpiFTEfHpA\nkILN9RHj8QAmE5pCkPUG1KZE64hO1KObZWSdiLrpMjkYM51PiOIMFdV85Bc+wge+5ZvZWN9A6R4o\nyXKxZH37LNbU/NLPfpRzw5j9518gtyVrwzPs336dSxfPsO0zlN7m4pkBtTVEWnO4P6fbXycJgWAm\nOCGpFjlFOWbjwnl66QWqfIkKEXVZMgqBRMWsDYYsJmO2N9bIm4Kjo1v8Oz/w/bx++3Xe8U1v57Of\ne56f/bnnCaMBa90N3ntGcPXOGCcDk8Uc5xzWCJTWICTGOtI0wZqaWEiENIDEhZbB8iu6WimFVOCC\nRazKWQCkUPfcuCgC7im49/bEtWtLYCQBd5/bdazwx67Tgy7Z6eAfODn2tOU5ltM5oWNW7TTYjqU9\n9zgHcxzHtHFNEG2wHgXLjgzkrmaiU2KpV+UUX553srb9nN455O9WNmyeLzhz4TymAWsLks4anSzF\nGYtUjiiVzI4OGHZ6uEhyZ++QqjFU1QyV9ShdQVCC53/t48iQkUYJXkmMNVhX0+mklI2nyEv++7/z\n3/ET//P/CFrgLYDHNYb//X/4r1nfPUuWwcVLZ3nX+5+lKAuaIjA5M2Jj2CdOEhZlwTTt8tqVK7zz\n2Xdz5vIO1WLChlS8zfV5+QtXefXGdbqjLr29uwQVs55lOFNz8fwZzDwn7WZ459k5ew7rDcVygZIa\nb+bs7uzw4pUbfPzXX+WDz30A+U1dbBAIUYOVdPsDClODh2XZEMUxSkh0rJgvF6ytbWDqCidh7+6Y\no+mCg+mc6eyIw70D8nxBWdfUpiaSAhU8hVCE0AJFRxGmrAhS0Ov1iOKYgCR4sKYBFEqHFaEgVwAL\n91gt38ZFLQt8T7l9CCsi4sHyl7aUx3mPEvKEhm4th7wH4NVrVvv8KZYuBO6zWiF4pBcEJMp7doZ9\nDkxF6PaIrCVE98As1T1qXEqJ8x6pxKrM5qvLWwaW/vZjdEwgG6QU+YLleA/rLB7FG9du8sTbHqOJ\nJUXTsJguiIRkZ3sX42fMZkvqcs71N24SOUEIBc7VeCeYT3LW13a4ru/gmooQGtbXBkglWBQ1OtL0\nez2+4cknCY3jjdde4fFHdlgfjqimC7xQ4KaM+oosFczmB8ynM7JijfHtI8LjSw5uXKc76DObzHik\n1+OFa/t88Lm30x/0mc/niNEuzXzK5to5Xvj8Z3jHc88yGI0ItSO3HivAS8mnn79CuZwQRQKjIp54\n4j3IpIPwEZvDET7ULCZzqrwgVhpT1iQKIhkYjfooPN0sZm9/Qk8LBpGELNCLYs7vbtGX5wlNTjcV\njCdT3v97P8BWFphVnp/62Cf5w9/17Rzu73Nnf8wHvutPEvd7bYbdWP7Oj/1X2EXOYn7AtJiT5w
3z\nomZR1izLksoGbNO01keqVY1WBEpinQffsnHWWbQStIWmkiAMQji0j1FO4IVv6XMlwAWMkK2LKFVL\nHYtTJTxeEPwqSJcC5QLCOYIQiCAQEipt6BU1Ktrmjf0lZVKQZjGaQMC34F1VPQR9HPADWlCFt6A2\n7GsRn0+IlKae1pSLBd1eD5MvyYbbFPM5KjgGnZRoKFjvBILWBNtwtO9ZljOuvPw6RWnJq5punOLq\nCh881jiu7+8zHAyR2qMv7RACKK3QWYo1hllegKy58urLXHzqbdy4dYPNnbN0N9fp6C5mMUBWCw7u\n3kXKhO2zFzkYTxFCMJ4uSGNFnERsjYY0VcPWxgiNZXp0B6ElV195kacvP8Jk/xaXzu1QzA5RKiLP\nS+IoozIN1ngund/hiXd+iMZD00i6SYcvvfwq83nN0jqMc7xy6zqDbq/N+rs2U++9Z7ZY0O90OJzN\n6Q3XmcyOcN0UFXcQIdDNEqTVyExgfcF3fvu3US+PsC7i05//EhfPXuQzv/RraFVgpCSSCuscUaoQ\nGgY0LKsJW1qQpQlxv0tlHI3z2ODbWElK+oMeTYB5vmQyz5kvCmbeUiwq8jzHW4MmAhFQRDRK4JF4\nGbBCILwCeRxTgbMGoVtiQgRO8j5uRTwI5Ande1IBt7IYtYLUGs4mPUxd44MlTeLW9WTlAhLw3qFX\n2/yqTu0YQG8mbxlYsijDBwuuJoo0WMvW1iaVlxTzGcE7mqLBKIetGrQLKATTgyV37+7j6opBmlEa\nRxLFREJSNhWR1iRqHeccgyrFb3tqU9E0htJUxIniT/+x76WfKRIVISLJxuXHELZhtncL13iM7HD1\nhS8h8XSGfbyYs33mLHcPPstjecUyr9mfFdiqZHNji8HOWVxT44zmaDxl8/zjFM7TGQzwzmCRxElG\n6jVNVbO+vgXCUS6PmB3s01vfJBaC5XKOC4asmxCCJM9rOv0eadYhUopYRzjXMknOO5CK7Z0z3B3P\n2Fpbp2gKTFXhveBwf4J3kEbQiR2ff+FlepmkbDyPXH6S2aSkbBS9bMB0WSBVANUmMbUSJNpT4DCu\nJoo8UhhsU6B1Rhq1SgYW6jm+MfSQhEiys7sBVqB2W5rOh4AY9qgbg7OBW7dv0zSGu0dHWB+ovUer\nCIFA64go0RjTtLkj6wktklowraocWqpXEGxLUAjZ1jNEAbrNgs2NbWaLCh9FLaHnPDKOILTw8sLj\nkVi3isuOUfK7Nc/yuVdeY304pKhKhNAMBwPuLA9YX9uk0+uioqilJL1EOItCc/3KVWbLBbauEF5Q\nlTUWqG2NNxZ1XG4RxwQJcZxR1zX9rEcAvu27voOOWVLd+hJmPMX1hmS9IUVhKYqGslpw4ZFHsJnC\nSc0z73yaepkTRZoQS/7IH/0e5vMpiRoQb26jlebW9Vts7ZxFK48LgjMSdG+b2dEhe3duonE0RUF+\n+5D+YAMpPEevv9H620pxuH+bTpKyce4Sk1lOFsfUhcVaw6CToRF0soymrlvLotp6MGMNKhY0TYP1\nnhDFYA1pJybWEWmaIozHVhWDXozWHTZ2d9h57FHKcsFFH6jqCxhT8ba1HdAagUMKjcdzsMwprMUf\nxybWIrKUygmyRFNXFm8trjQ0RU1AYAGlNUJ5nBAsFgsGgwEds0RaT10b3rY1IktSxOVHqa1D9boY\nbzHGUFUVh4VjPB6zWCyosNiVigoh2wptWkbwuIUhSIiFRwbPIxtDhrUiVZI9W0MCggihI8Kq1sYT\nUEERQsvgsert8cH+7k1Kbu6cZXP7LLfHE558+gkQGiEDy4MxH/zOP4BHUdc10hviTp+f/r//X7zx\nNAFwYJqARLNoaua2QDkHzpIlCVEnQoWAC4InnnqMXidFSUs+OUREHhclaNmj2+/hXEDr1g3Z2D1H\nXVVkSU2aKta2t5hWFa988Qu8/e1vZ7q4y7DXZbyYEdvAUsD+4QGPn13D2gJkQp7X6LxhLUugFxHH\nXSZqwp3JjEeSDi4TZGfOImSKrXPibqAT9aiOcl597VVeefkKxdJSVTVZR+GcYDFdkCQaS41AUFno\nZwnBObK0w9AajIc06dA0OU5YlPYIYUl1j26nS9KV9BJNPbmFF5JbNw6xxZzRqMv+nQPWHn0WITWS\ngHOB/mCdYrYg6yTUdUXwrUuUaEVZVgQbmI3nDLIOEs1wbUSDpzEGW7cA1lqjhGgtRBCtBVEQhMeH\nBoKlmi5wPhAnCVmqWEtiwuhsa1kkGANRlHI0XzKdT7mzv0dhArWPiKUmTQWP7axxeWuLiAX7LxeY\nxrK+MWCYRlzPc5BtiVAQoQWIF1RNTnCeyAukkqRdTZKlfOFNdPYtA0txdIv4kQu4o5LQ1ARpqJ2h\nsz7EWk8cx+gkwnvP6y+/xva5C7zx+nX6SrTJyxAg1KhlQXfQoTYNxBF5XZIYR1CSf//P/HsU80mb\nJe/EfOSnf4FOgKaa0ul3sXmOs4HD6YJIaNJbR6xvjOh1dzmY55STGcMzW7y3943M946IUAgkwcF0\nPGGwtc2dOzdYPnOZ/mATLxQ6rukTUec5nThjMZ8wTLtcOqMYpQkT3aBlS52LKONg7y7LZp9hb8DT\njz7K1Vde4+xjl6hrQxx5XOnw3lIWS6w3BB2TJRlVXZHEiv39PbLeoGXxyiX5cklelURKsbExoMzn\n7KoR67vnGO2eRagGZxXVoOFuOcUhmC1KcKGtclAKhSDSEmMswTR4LZBaUjc1UQShcQQCm9tbmLoh\n6aSUtmmZsBVVPOitqpZpraCUEZHW5GWOx1PXNb1eD+EkkYBYKFzjiJQiiiJMY8hUYC1VVM2YC33J\nbpzyts3LxJ0u82VJ0okoJxNiDJRH3JrNeO/7nuXm7bsUd/boKMOH3nEZIWO2zuzSNA2maRgMRsxn\nB+AstmpQAtJ+S2n/kzfR2bcMLBfOXeTWa2+gEBzd2iMbDVCJYjI+YtAZUVY5Ko7Yu3abm9f2yOIO\nKgjiJqcfeWQU4YIjFhLckq7ylPmcfpqxMeqQ9hIOXvw4EWCUIjSW915aI3hPlSdYBJUTrJ05y9vX\nN7FCsbE+oqpyur1tnnruWYKt8fkCffEMSb/H5379NxiMY155+WU6StNZ32c2m/ETf/fHQUVsbp0l\n7WX0Oz36Wcqw3+PuwR6XLz7Ordv7nH9qC60UEsHd228gvWQ6rtne7rCsp8xmc4bDHrYuiZVGikAc\nK4SOQTR4rynzhkkxYbQxxHvD2tqQKEmwQaI3+5jaMpsXeCcY9DS9zFKVEy5evkjodigWOb2sy/aj\nl9i8sIESmsFZTxCc9Ji44GiahkgrbG0JSlA7i1aKCEmadalMTV6WeCmIpSa4gKsNiY4QOsbWzXF0\njiNgTE2k2nyIB6TW1MbgG0OkNIvZnLW1NQyWOI6oqhIrY2ZFQ7ANg25G2s1oGke1nBD5wHJZorwj\nSTJUFLN7dp1r+weIOOLSpQtUTcGmioi6KdX4LsE7N
FDVOToYIi1pfM3GaIRKIvK6elOdfcvAcrdq\ncLnjc1/8Er/nA89iy4ru2bMUuSUKS7T3XHl5n73DQ3rdAYt8SW99k2K/LRvPog4egYoaZIB6dsj5\nc+to7Ym7EelahyyKsXlBJ4uYm4pupGhsQzJI6GU95ospMGe6P2G0vsX8cIKUjtn0LkmUUNU1/Sxm\ndphjyfjCC1/kD77zOX7Ps+/BhTZPsLu9zqDfZTGbMBps0vRSGlLOnD+DTBRb4wVRkrCbJOxLONib\n8uKLn2K8d4NB3AGpiDoKLWEw2KReVvSHGUQpUmt00qBDIPguTktyc0gnjjG2YpD1CNbRiWKEEJQh\nQrDgFz72OfLK8wN/9AN0lcaKLuW8pKoaks6Ig8Mpvlqyf+M6xlT0t8+wdvm9ELWMlJStUuMDXghM\nY4iDQ2mNsRVWtPQr3pHGMVVeYZqGEAJJmhKcJShFXRtUFLXNbRLypiJOYtwKeMY7pBBY51CRZjqf\ntfuNa6lnJELEBKVpaqiFoW4aOonG1BU7wxFaRnjrV52ObrXWgGtdyc6A2juWkwlprIi0XLUge0pn\n6aUDUJJxlePnFhW/ORzeugx+1GF0qce5i2dZmoJ6ssTu7ZHtnMFWBddef4Nbe21OZRqPyfMc5z1G\nRzgfUMbuJlfPAAAgAElEQVRQ5jXBOhq/4Myoi17boCgLXr9zyMXuFjeOclIdoZJ16O0QdTWJihkf\nHtJkGcGk9Dt9yskh9bhhe2uNolwgopQQJEkcY40llpp8POHJxy+x2UmI17o0eIrJjK7WqNAwGnYx\nZU4UeT7zpZfYWuuzOFiQqA7GNmwO+kQqYXDpEdbWhlx+4o/z8vOf5dKjj9KE1r8PZcMLX3yRu0cT\nmmpJ2TTk+ZxgPd57auHpyQjvHEppJotDVKTRVU3TVCBS1pRhUVU0LuEoL9hIEjwwLxpG6Rp/4Yd/\nlB/7sf+CN75wlY31NQ4O9k6ax46z4AKwxtCUVdskFwu8dyg0cZIipGQ8nmCtQ9SObqeL6g+oqgrr\nHDJ4ghdIpds+fxfIi4IkSeCBeiwhOEk+ZlnW0sTOoZWmqWtk1KqocR6pFZ1OhvBtZYK3hsLUaB1j\nTEOSKaSApqnROmorumXLyFnnMLZCyTa4j6OEfLFE6rYKoLc2wPxuDfB91VD5I2QSMUg6qK11xkXJ\nqNPn7//Df8zFrQ20hUs7u+R5zu5gjbpuSPsjrHU4PItlyY3ZPj/4x34AWyxwFpx1eAkHBxM2NhKE\nF8waSdbf4tBVJCImjDTXjg7o9tY5zHO++OJLDDe2eOnaKyyKkpCMiIVDOUu/kzIa9AlNw3I+w+yc\nZzabkaQJ2WhI5AWT6ZjOcI04gsp5fF6xGE9RUVvL5BtDEqcILGWeM8o008M9NtYyJpMDBqM1JJ66\nyIlVYNhLMCbQy2Kmt+/y2PmLbSY7i2lMgykbvLEUyym+LKnGM6QS6K4iX4xxvmaaLymnS+40R6xt\ndDmcjnnjxg0+8MEP8vM/94s8eqbP0WJCEIKDoxmX5apQEvDB44NHaY0SrdvkaBN/zjqqoiCOO2Tp\nvUx4WZarui+F8J6qbpAqwliHJqBEaJvfhEZHEdZYrLWkOjqpBCjL8iQjb4yh0+m0C1BoTZ7n6Eiv\nikEFURShlCLSCSBREqQKJElCXdd45xARJ+MgA1p2SLXEVEvqxtAfjcjrEikFyyrH8rs0KZkmGc1y\nweZgA9ssSJMeNBU/849/Gu8EwQQSVvWySuC8IcsSrCtIkpi8avj9H/oAL7/xOnp5F2Esnd46y8WM\nYlnx2IUtTJMTBQ/zMVd/83nUcJsLZ3YQ1ZxRkaNm+0z2DvjGc5cY1yX94Rl8TzF6/Ek+9Zlf5/0f\n+BasrymmcwabW5x/17vJGoe2NaiIyY1bvPTiFzn/6CU+/qnPsbY2Ypj0mWN44+4tup0OV69+gefe\n+xRIjSkNcZKAcfjCkA23EHkO1oJsV1hR1iDLilRKrINqOQWzjdaSoiyJVUQv0Rhr6aQRla2pDsfs\nnjvXJgxtjaiX1Is5ZnpAFGm0k9T5nM21IR//tV/nT/1bfwIR5ty+OmYwWke4iOACwXuCkggi8sXy\nVBVuuz6Bo41ltFRYYxFRhDGmzYwL0TJhxiCCxZhA0omIoogotNS/lBIlJd46XN2Oo+Sqr8a1VK5U\nihDAmBLrHN4FrLUkSYLznsbUnN/dJYkVwXmqqqYsGrwXmKYkTVN6vR5Samzj2djYAjzGgwgO0yzx\npkKJhPls1pa4aIlSDlvXb6qzbxlYXnnjJge3r9N8qiKJJDjwKG6OS3r9IbUzJFjmh1OCEejNHY7K\nOVksefLpx1Bpggk5V19/kUfPfCOzfM5Gr0+qErLNPvu3boEM7Oxs0jQlTz12kV/61EuMUk8mLJud\nDOsDywX0RzFJSNEeHIoIy97NW9iqpJrPWO+tEaoFTZ2zqEpM3dBUjqST4TQM+inf8s7LBCnBBZ5+\n4luYTffppZrdx3bwoeaFl77EU7sbeDLiqEfWUSjt0YmimB1yODnk059/mWJ8SB0USdQlkUAcM3OG\nXqdLFmckGHxd05gly8UctCAZDMllQEpNaWpclfO+b3wvaZpSFHM+/7HPsr29ydPvegfnHn+G+XJO\nx+zT7a1x5cUXuPjUuwjWE2UKfDjpgWlkoLEObz3RqmBRa43SbbY/CIcLFg9opdt1BZSiKgw2eOp8\nSbfbJULiBTQEpA/UeUmSRjgCxC37VZYlxhi0tSglCdSUpUGrrE3K6gjhGgSCfL5gaS21r4l0TJJ0\nscYz6PZQAdQKlHNbsFge4H1bVBq8I40jIMZ6z9bWFtZarHVtglyZN9XZ3/ZV9L8WEUKE2eQWxXSP\nfndALCUf/+gnePHVPZJOQhIaQrMEU5ElMd5JLBH9s2f4lm96F0eTCVrC+ctPMb57l0wJptWCSTnn\nnT/zh37HP89D+e2Vv/jqH8LYEuc8IijiJCGJY4RorYxzDi0VcRoRxylp2sE0BudrIiSmqojjiCxJ\nUVHEclFQ1RVCCZIkOSnATNOUpmnaKmQZiFL4D//mPyD8Dq5I+TWJVJrhYB1f1Vx59XXiJGF9a4um\nzpFFQyftIOKILIlYzpc8/uRlbJaim5JhLEnTlNmN18nihOnNfbrdDlmn91Z9nIfy2yghBOIkbpOG\nqm1HIDjyZUGSJiv6OVAWjqpaAhBFCq3itkcnjul0MqSUqxVnBEmaIGQLlihq3cM4jimKonXz0hih\n7Zve11sGFmzLy3/x+S/QI2aoNSpUaFfjm4rSGKTQBFvxjqefYVo7bFXhfFsAYauCfprhmoZelOAa\ni4oe/pDZvwxS1zU0jiRO8NYhRFuunyUR1jr6gz5WQmQjgnQoJfDBYWqDbQxaSJZAbkpGwxFxvwfO\nIlfEANzrx5FKUswLnKmoQ/mm9/WWgeWlF+9w99p1rFDcDoa6rlkaSOIB3c0+iah59j3PkcYBtMLt\nj3nplde4
0Y8QjaPTiTkYH7Uzj40Y9juE5n6Xcvqvf4wyX1CVJXGSoGQgXxb0+30a5xmtDxAbj3Dl\ni59lZyDJ4ogkijGNZbacMByOqIqKTpaR1469gwPOnztDkmVtwtIaNBaCpshrumsDci95/ROf4NLF\n8+3iby7gvKUxhtJbtjY2KPKG3affQ24dSjjy6ZJgKt549TVsXjBeTMmLiqWtmS8qiuUMV+Zsrg9J\n+wNsU+GKgnE5IUpTetkI4QRR1tZd/b1f+BzPPfUYZwYxcTfh47/xK/zQn/iTbO6scf7tzzK+dY3l\n7RfZztaQSeC1seUbPvg9dHoZzrU1dn/tz/0Qdw/3UUrjrUX6dkUYKSXClkgBIk6ZlCVarspaPPjg\niHSE0m1GfLFYkGUZuu2QIVK6zeWodi21SLcq+De/4d4CplmWYUygNg4rAgMkBItIItxqTQShJcaZ\ndpEkp7He0LOBOE2QKqF2lsgLMhnhTIMHlqfYNmsdg8EaSaLZ3t5htriLInpTnf0twSKE+HHgXwX2\nQwjvWG1bB/4P4BJwFfjeEMJ0te8/Bv5twAH/QQjh577SuN5WxFpRTA+xRYkXsBYn6F4XW+d86wfe\nhxIWp2J8gEceO8+FS2fbcj1rCXgQns7uefZvHjKZT5iM9+8Hi/MUZY0pG+5eu8l4WZFqjSAw6Pfg\numR4wXH71iHNQtJJNL1hjzTJEHpAU1g6MmI+XrK+PeSzn3mNyxd2aZZT4k5K7R1Bg6wNOhgObl0j\nXtshL5d0+j1saHBVzZoacjg+QqGY3dhjNBgibIGZTyirnETF3Lp2DbucsH80Zjqdkpcl1jrSJEJl\nMRMvub3wRMt96qpACUVlDKq0TI5yuklK3AMdRUitUGnK2vY6V/dv8IFv/VZGmxssbcHdq29QFjN8\nHDO1hqaaY9RwVVAYCKtKYecd/W6fxXJBFEXYpi1HEk6ghSBWirKqiVXSekkhgPDEUXyybphzri1b\n0ppUa0xTYhpDFEX3uiy1vm+JJYDpdIqUGi9kG2OkKcI3SCXJdARSMJ5O6K4N23tWAmcFbrUELbbC\nu3adsUWe0zQNw+GQfr+P1hrrDKbxKNUuieStp5YRXrx5gP+1WJb/CfjbwP9yatuHgY+EEP6GaH8z\n8sPAh4UQT9Oumv807Q8Y/bwQ4okQwpf1kcaRRAbPoNfDhYATbUJsfZjx9JNPo5IaQUDHKa6qiToS\nV+Q4A1mWMBsfoGJBiAXDQUIkMkYb5+AT964xCJb1rRFV0/D4Y+e54Xo8emaLCEM5X+CVJu4M2dgY\n0e8NeelLL7B/1BCpChFlDGUN9YLDSYHc38PpDv/4F3+VYSfD1QXGBurGMZ9NObs5QscRejBDxAPu\nTHPiXkaaRkxLi9EJoTRESY+b+wd8wxMeVVY0dcVRPqYucsZ39/DBc2Znk/HBQbuwhDd40T6m+TIn\niQSamMZ51gdnqOqcxXSG9zmiqlAi4m2PXsSWM8plwhef/zzf8e2/DwjoRc6R2ScKDeP9GyyPCrbO\n9Il2tzDGkNH6+VKAacxJy21d1+goIhZtQOyQFNYRpxneCoRqH+9xsAwwmUyIoqili1etxMeLUjRN\nAwLiqI0Z0k52n26MRiOUivBIDIE6VoRlTT+O8LRtwVmaIqxHCkEnTfAqQoXWAjVFSZalCKVPcjLH\nidCqqqjqEikilsvlKjaCibesjZL/f2AJIXxMCPHIA5u/B/hXVq9/AvgoLWD+MPCTof15vKtCiCu0\nP5v3Gw+Ou7Yx5PDWLYIzDEZdBusjtna3QXqm+1eJLmyRdtcRUhOEw5qCNIooTUNVLkmSlMg7RD0j\n1CWp9BCp+67RU4HS5HSkxOYHqOGIuKuZ7O2RxgK8p5jucWZnhDWO9zx1iTuHR7z68ks8897niMZT\nOqljq9NlvL/Hk49tUhZ9pEq4c1Cjtebc5acYbY0w9QyddXEigtCWjMtE4/KS0nvSoqFezIkjRTeJ\n+PwnP4W3JUJL8rLEGUPSyxj1uxzd2ScRkryYUZRLbt89YLGs6fT6RL0M6yHrD7l58zbGBXrdGB3B\nsJu1fTV+Riw1t+/u8aHf+yHqvGJ+cMj5J8+hZoKNZy7w0X/wIlu7Zwh+xjBLkMSrxf4cnjaPY52l\naQxCQGPMSc+JtQ1xmlKVDU1Zk/XSkx76xhjiKGI4HLZtw6u24rppMHVNlqToqF2Ew3uHSmPcA+28\nUiqcc23JvBBtBbbShBBorMGHgNYaSVuKE/UU1oFKMqpygSOQlwVy9csBUkjyvMB7R1VVJGmCM6a1\nwlJRNxXb6xt04jfHwj9vzLITQri7en0X2Fm9Psv9wPiqP5HXW+9z6fFLpEqys7NGlCTMFgvmeUG8\nfoHDWcXs1k2sF2TCcef6bW4fHLHTVRxOJ2S9jFA0vOO5d9OYCuc9nej+maEQcTtrKYeyCb0kI3hJ\n2u0xOTxgY+MMWjuaylI3NUIpLlx+kuHWGa68/BLn+xl50TDsd7l+OGN7d5emXtCPJZvpFBMCXmwT\nQkTa67LIC1gucLJkeuuAsxtnaHyAXsre66/TS2LyqiJNM7xZtnkLFbG1uUZVLDj71EW++IWXEbXl\n7q1b5GFBqjsEqUkHPa5cvclg0OHc+R2m4yNEEFw4dxbbFOSzKb3dXQ7u3GGjb1DdPnFIuHP9FsPH\ntnj1xVe4+Nyz/PLP/p9c+7Was5HlytV9BmuOaPQYF0REAJxoOwYbawlBEMVJ2yBlDbWxqDhCBkWx\nzInjmKTT/s5JU9cnPfRytYCFM611ElKik4RIx4Cjk8ZtBXBtQMhT66C1Yu2q/z44hHV049a1+/+Y\ne+8gy7K7zvNzzrn+Pp+usrK8UbV3UqutpFZD0yCHDGYYZgSIwQg2hmEwwcIwBIsQLAjYiRmWZQd2\nBqQZIZw0SCDTkrpbrbZq77tMV2VVZqV7+fy7/p6zf9zslqqB3omdmGidiIyMeJnxTmSec985v9/v\n+/t8ix19jJIS27LJdvRo4/EYpRRRlOI4DlgBg8E2S3MdLGWjjUYqi8k0RwhFXmh82yHXJdpoSgFJ\n1Aftveqm/x8O8I0xRry6k9c/+DPfcphpBJRpTDwZMBnBOMt59rnneOMVR9HdFznSCIjTEpNktHfV\nefPbbmf9+CkWDu8jThKsoiQMArqDPq7rkScXZjOE1+D0xhrnVleJkozr37qHe+66B51EvPDCCyTT\nmN5oRKkNc60WUsHevQfw/BpbmyuMaoLdrSYGm6NXXcEX77yXa6+6CtdXNFpNRlFKvbMPKwjQWYFl\nW0ztkprvs2spJJtOyJXAkYbxqM+ew4cZ64w0HuPZEmEpbNuhzDMW2i2+8sUvMducYXtri6Qs8IMa\nX7zvYa68+jrOnDhNogUmNoTThEYQcvzUSdozHZLpFKUklrTJoghPSWphndVxn1CGWN0pnRjCCK69\n7Q28w55jTSSs3vsQF938Rnp9gZIxkjrSyJ2eloI0zXfiDhtZFtW1M
E1AChzHQeuqlaIo9csxiG3b\nL4MgvpGAKahkP5ZlMY6zSt0sLMhy9D+ANfpGakueV7GEbVXE+7Isse2qwe1lLZllVdjWnRqKpRzQ\nkhKDLjVJGqOU2iGIVno123NRtoUXBGjKlzNl/9j4//uwbAghdhlj1oUQi8BLkfUrLfL27Lz298Zv\n/u7vkEzGuLbijVdcxJtuvom5YBYznZB1N+g4NtlwjCSnpQWr212Gxx28xkJlKpRmuMIimfRpKMl0\nMsBXF/45jl2yd2meA0cOUSgL17N4403XQJxzw5tvRmcpZZLxiU98nBuvvJRmIMimJWUhuOHo1Uz6\n68gsxRJguR6uN8P2OCNFMlodkuUTOqVFWmhsx2UjzojWV+n2NklHY5Z2z6Fci7m5OS5+3VF0muAI\nw7RIcWyBshWO7WDKklFvmysuvoQH7nuc4+fOY9db6M2Epd0X8fijz9GZmaXTqpFGEyajFG1ChtOS\n1bObzMw0QFtsrG/iWQqTJfTXVpg/sJcyd0mynM7iIYpPfY3gWEKyZFGGHda6Z7hx8R3MHPGYDrao\n+d7LHZxCSHzPr+73O+ZQcZ6S7sjYX9pYaZoSJ9WV9OUC3zekaIuiqDhhQiBFRVLJddU3E4R1imiC\nKS/UZE0mVeXfaI3luigjyYps50EwOI4FaBzHYzqdkqYpSZIQ+FUPTVmWNFotUlHFR0qp6vqnviF+\n0jmOZfHCuXVOrGwABv0/SUj5N8APAP/7zvdPfcPr/1UI8btU16+jwEP/0Bv88q/8W7ZWljHpBJEm\nrC6foz4zz9zMLkJ/AVPmOI6mzIfkhWbP0UOsbneZ2dshS3OEVMRZRpZOKaMU3wnQXFhUshwXpywo\nigS7VJRaINEoyyYXCUyGFHaIqwy76w6ua1EGhjwqMdMeDc8j1+AkCRvZBH/OZ74TMC1GtHbVsHAI\nnIwsh6IsuajuEssZNmaaBGXBfDNA1Xw0GmFS8jQGUVILIZ4k5AlIP6PZqJFOSh559HFePHWGQoWY\nwubsmdN4vs/hPfuQ2qDzHMtxSbRmfW2dZtCkuzlgZa2ytDObE66/7BjFqIcdOMyogGfOnMeuBXwt\nOsfkzEnqT41p/tQio9ObLK+fYLTxIOlggGzcQDi3n9xMsUtJluZYllt9OoucaZwzmIwQlsKyLeId\nRzUpqg34jYyxl5ICL50OeZ5jdnhglJVzl6NUlSQpDa9M/+Q7rQFGG3Q8oXQkuRborGotfukqFScT\npJQ4bg2x49GSpSl5XpUiHMfBcmzyHW1ZWmjKNKEocxQ2g2mfpYUWu+drBGEdx3H5/L2P/aOb/r8n\ndfxxqmB+VghxDvi3wG8Cfy6E+GF2UscAxphnhRB/DjwLFMBPmH9ET6OEoFGrYQIPbQTtvYdIM0PQ\nmWWUxDz11JO4xhA6kt2HDmMM2Ic7vLi6ijaKYjomiSOyIiHujbC0ZjAZXDDHp798F51aDWyLhYV5\nXNsjTlPa83NIpfCkTSEVs/v3YbwQ6fnYYRM7zknTIa4UYEdYRlFLUo4cnketdal5Fo520FohS03D\ntclMQZom+KqKiRh2SeIJtdDDdRRFlmDJkrxIyYZD0qxkbnEv08mYXq9HmhVIadEdT/HnFxhPE3a1\nW+xvN7CQWEphScnaZEJSKupWg6yA0kgG0ymT7gAntNgaRKT9IUv+HFEcM78wz/nNNWgHjG7Yx/z8\nPoJOh4989EO8/uAx0myE0zpM48AiAgtHN9jefg7LqlBHVXarBCNwPQ+kRGuDZYmXC3vWTqbJsizK\nsrwAomfbdmVhZ1kgBNJUbmy6LMnzAqUl+hX2dMKUpHGEa7kgbBQ52DbG2OiyqHC+WuM6XnWSKSjy\nkrKoTriXTg/LsiiyHNd1iaYRvu8zHI2p1WqkWVk5rymJlA5FURAGwas+C/892bC/Z566M771H/n9\nDwMf/v96XyGgiKZYWYoaJXSnCXPzuxhGU4zR3H77O/nzT/wZt1x/Ha12B6fVrGKL2Qnd9Q3aFx8h\n6W4hQ49nnn6CK/btY2t5Fe7++hzX7j+C63nYnkeBIZjby2AyoD2zQD4c4IU1iiThhlveipunRFnB\n+Y11NnpDPvnxv+LH/9n7GPTXWV/vsbB7iaNXXUOcpoQzAfE0Is8kZ86cZWF+hkDYuK5Pvzdk3yVH\n2Tyb0ugEZImh4TqkJidKUxxLo22XWrtGlGdMkwwHyXSa02i0OXzsYk6eG0KacdHuReaEQSlBXlZ1\ni9x1KJCMk5QESaYlXqNB5Lt0JxNOnDzDJcf2MxgndHa5xHGJELC5sQk1n7X+Fm92u/z6r36Ic8cN\nT538M97ytjqOu0GafxnLvp6Z+UsRpCgVgtRoU6KzioqsdYLWlSJa7DiHvXSqvETyfymmUEqR5xXF\nUwhJkiT4jkuWVoF/aQyOpUiSC9W+RkiypKB0JcJIGgYKnSIchdiZJ89zdJnjOA7j8QTP84jjKmYN\nw7CC55UlruuidYVuyvMcIQRJkpDnKWAYDSueWF7kDLtbr7pnXzu5i84ok4TQc5nqAb6rGWyfp9Fq\nIZ2AJx64l2tvvIH2rIdROSURBWAJzdzueZJJD2FX0IJWrYbMEhZbF2rDOp4i1RHjbhfH9+iPJhSN\nOr3VhBnhsjXcpBa4jNcHlIVhezphz+ISTql513vew4F9S8h9HTYPZ9Rcm1Ll7F1YYFiMIc6Zm92F\nP7dIMhkx1+6QJAl7l/YyyTOWXneYctKn6VkU8YQ0GmKRM00n5K6LNIa4P2bUH+CHrYqemRhWl8/w\nlutu5cWnnmZeaRwrQCoHnacoXxJMU+JS4ylVIX2KktxUbK+Glkx1wnOnl3nHrW9mc2uNwg2Y7XSI\nkzHFdsQNFx+hdWlAOvgqxy5/M/3kvUSnfh8jfxJ59PUYbgMRc+u33c7nPnMnUlUBsusoTFGSGwsp\nKw9K27ZJkuRlSr7RO1QLvo5IfanfZTga7iiGp4R+gFCSMs8xlmDyCrhdZ9fBKi0kJePuJg3HYZTH\nCE9BUWXaiqIgTxPQVWq7yLKdXhxDlmU4jkO+I7osyxLbsnAcpwrwq7fGcRWCirtsJPjWq8ulXjMx\nlZEuTz/3LMtnllnZ2mazP+Xs2bM8/OjDrK6vcebkCRZmG6QTg85d0khXvQ9CUSRTXM/BEgLhCR6/\n/yG0UUzkhdmMTAq8Thvb9QlsjyAMWT79PMpM0FZMjRy7nLKr3mTY20ToiHS8xt45icpHGGMRRxkz\nFlh6ihl1OT9dpxgOmAkc4tEWcW+daX+V8dYyWb9L2u0z5/gUgwllLiqYnMwwlkWcCQw+tuMwHk3Z\n3urTbC1QZDnZZMj9D97PaDLlxaceYVdgUffr1D0Hz7ZwnRCTuXhuDdvysaULxuBIgSMkFjahY9Px\nHOZqddY3N/D8GmY6xSMnoAJ1iMUaz37tOJ/7xF1M1p/h+fv+nN//Px7n/Pp1ePYfkCdLnHz4Yc4s\nP8Xbv+N2hr0NJrHmN//w
t7n6+iV+9Cd/CM+tjICyzCAthet7eIGPdGws29phHwvyvEQIRZEZhNXE\nCudoLx7EbS+ianN4nSXcxm72HbrignXbvfcQl7/hOt70ptvxfZfSUwgrIIsjkiglTwuEkfhBC6lC\narU5mq1ZOjMzeL5bVf2lwFgWuTa02h2ksoniFCEttBEoy0MIFyMgyWKSLGYcf5P24Buj6bTa+I0G\nbqfG4q5dbA/6bG722Hf0KIuH92FJiWmHJNtdQPHc8yvouOSSKy4miQqyVOOMC6658WbWooThaHLB\nHB/7uzsxesp4FEOhabRmWT57irt8j3azRadVY9AfMDszy9WXXIYUsNlL6Eea7V6ffhYzPzuH0BkU\nJTXjIZVmuL5OqTzqjQDbDchXS3RR3c+jNMGaDMEYlGTnqmBTpBm27SCFZDKaIIykudBB5xlZPGLc\n6/L2276NF194nnG3y2xYxyXGtZqkWhN6LlJq4jzHsxSFbeGVFhka37JIigJHCtCSWhgwHA5pNBoU\nRUGSxszPzTEcDfnaVx/kzd9xA8nIZWNrlTRc4V9++D/jNg7RGyj8IGf/6/aw8MWEJ+65n7oTkFmK\nj/zGz/PjP/0+fuPf/Dt8t0McJTiOR15WsVZZVo1fllXB7F4K9oUQBKHPRa+7ksk0wvc8giBgfn6e\nMAzJtWFzcxO+Yemefv559LPPc3DvYZRjg9S4foDIC4RtvXxyWJZ8WWavlKQozE5/i0thNOMkocxy\n1tfWX5bguG5Vi7NdB6lBo/HDgAwHk/6Py13+pwwpFTMzszRsh3TUhfVzTDf7bG9ssWbD7OwC0bhP\nPumRTjWzu3dz2aFjfOnLd6L1Ufx2AzeH4foa+xaXSJIJs53WBTHLntl53vnPvw+tFOgSU+RkUYxK\nYhKTUa/XSeKcJx5+kt2XXYnTDjDSkJQJuy++DF2klJbAcUISAdKp0z97EsezUAJyZdGo13GX9lMk\nEWWWok2CLhJ8zyXLEzzPpUwzhLKYjCNCv4YuKqu2Whjw/JNPo3QGecz51dOM19Zo+g4uhsC1sSyN\nQDHNEtBUQaywUBJ820ZqTazBUQ5loSuS5GTKzK4W9dDDd3YxmYzpdTdYWFhEKofHv/ooWTlkYXgd\n1138Cxw//xxHay3ajQ7PfflOxs8c5/pr3sjifgdtTfmLv/4K73rn9/Drv/x7jLoBt3/r6/ny3fdS\nlsvP61IAACAASURBVBnTaUwSV6annudXyGO5E1zvpI0znRHUmnRml/jyl76IVFWjmu/71FttOjOd\nC+44b3/b+7BUJaP/3N88RyoL0qxESIPZKURmWUYUVQmIl1LHaZoShmElyTE7LLJaHRtZSWx2hhCC\naRpTujZK7tz4lIt0X72E/9rFLBi2t7exZ2YIO3WG68vUO22sXp8Qi7S3RZEVCJMxHvTYvW+J4fp5\nXE+SZGMcXAoMniNIoj6+AGVdmIM8cunhCsRAhlSCwtJYeUGRxvg1lyKf4kgXqSEZTjBFQlgLMEWG\nsCVRqhG+xyROsSYarxOw/MDjzNRslKdoNGfYOPH8TnHRw3GsCkRnBxRFhm1XfvNGS5IoxfdDSgNu\n0MRSgkGvS6AUlmUTmwJSgyUUtu2jkRglENKgtMFSEimyHTvvEscSlKVFWmYIXYIWSFmRGxGCuh8y\n2O5Sq9WqrGNZkCZTIKfWbpImCnts4Tk2szqgLjxiMcvhm97CVrTJdjdhZmHIkyvP8oYb3sjy+RO8\n4x0f4NN/8SmufP0x7nvgQaIIlBJYygIUeb5TeZf6ZXdm17UrCYzrEoQNLr/iavYd2lPZZhiDVDaW\nsuH419etOxhgKYdds3NMoghsTZwaTBmjtIVlff3rJf1aURSEYfhynFSUBZZbac8C293piCxIkqQq\npoY+k+mUIHAR2iAwdML6q+7Y187MCMFmf8Tq8mkMKQszTaKtmGdWNhkNpoyTbeZb83jagLQ4cfos\nolbj5KlTzCwtsrHex7YdmrUGQqlKERtdWGfpJQU5BqUcdJljpTl5nuN6LpQaoxRJOcbpBLQP7SMf\ndslMSZGnyEwQdXt4ZYdQSbK6jw4EM3MzhEJjlMazLYzjkmcFWR4xnJTMLcyjpUuZF/i2iykzknRK\nEHhou2AyHCJFCUYxHg/IihTjephmh7VnXsDdobdgqiq6ES6lKSk0ICpL8DKXKHKkneOgEIVDrlKM\nztFoCq0Zjyf4dZ9wR7k7N9Opsk7SoNMMz3PYiM7Tv/MOZg/brJ97is6uH0M6Id61HnNxQGAfZUDG\nsvF4y7d8O1+75w7edMstrG33ycscpQJ8r0QIBy2qDVvoogJ851UxcDweI6RECQdLKZotn1F/wGOP\nPcZoNKbfHzCdTitV4c74v/7jf+D7/+kPMrtrF7b0UCbFpAOU42AsQ5pF2HaNQheYUr9ceMwL8XLa\n2rIssvEUrTX9KN6RyFgEgY/rOpAXGMfB7OzFsOaTq2/SmEVjUQYNrr/+RixSTFGQFhmXX38TJtbE\nMiX0auh4RC0IOXnyNIePXMSehaUKbaM1ruXQm6Y8/7VH2DXbQL9CSLn54kk+e+oEqcmxfY80LfHc\ngHqzDkVJoksarRoba30eefw/MVsLiPMU23UrA9VsSpHl2GEIQnHRxUc5F4+J+t3qATq7TFgLkUZQ\nFAWNRgsjoMgTPNcmL2KiaIQjJaYoAU0YeKytnqPINaHn0NUFNhD3htRtjyiKkV5IrhWJtsmLtKpV\nFBGBkhSyxAibwkg8A4UuyHzFJJowjEHrjIZvkUdTGs0a0WRK3XWxHJcgCDi/uUWt0WCUJmz1+uxf\nmGf73DZZLUD1v8wzDz3Ere/4OabyBUYbq8y3l4iydR594S46e3NufeetDDamPP/08zzy2GmErlK5\nRr7kpFVJ4JWlKq96DGUByq1uAsYosizn+PHjSGlIkuLvGW5VJ5KDLqpOSKlKLNtDKJs0niKRDMdT\n3NCjKDSFBiNFVZz2fIyArDDYVqUstiwLU2YYUaC1ANswSSoIoJQVZD1LY1z3m9T5SxhNx3fJNtZw\nHIUM6tjaYJItdFIyTgr8hsaMVom6BfvrIWK0ipQeWVFpixCC2VbIDd9yM2V3vSpifcP41re+Cdev\n4QYepag29BNPPMf1N7+ZLE3QsgSdIkpJmRuKLMZvdxBeQDIe8sAXP8Mtb7mFYQJ11yHu93D3HKJ2\n9TVo28OWFo35ReLuCooEy69R9Hs7gLiCIjc4toSdanI0ThGmxLIM/d6Aiy+6jGE0YTqM6K9vMdOZ\nIYpyEmERJSkmymg0fPykxEXgG0OgShK9U/XWJcq1OXXiNINJnz37DmJkyKC/Rj2cZTIeM9tpY0nF\ndDxCug6LuxbZ3O5jhyGBcpicP8fuA5KLjjaw5PXc9t23MRo8Ta+7he9kzLRmUa0ttlbWWHBtNk4/\nyKlTCT/8Ez/E8Df/kKcffwZjAoxUO3KWEiE00naq4F9AkmRV+huFY/vEUVZ5UiqJ1
gXGXJiULYuS\nIAgY9LdxLRClJjUCkZdIqp78tMjJ8pIsy7FtizIvUQiSKEJCZVRbxkgpsZQkLRVu2CBJKh8fYVk7\nxVKJp22ka6Hsb9IA31Aw26nhlAmjpKDRcCioIAWZVJzd2GLP/qOgY4a9LvXOAuM4QTglk+GQmcV5\nBGBkhO969IsC9QrVcbPVIi4SUp1gI7H9OkoLpmlEPq0+sTytUZToNEcoB1PEZFGGbbvMzsyRpBPc\n+jyWhHqo8HSDIitwZIqFYfjMoxDa5MqQd3sEbqX1QpZoSiylGE0m1MMaZZGSxymzM4v0e2P62z3E\nDls4n0ak0qEQgheXj3Pk8FFwPc6snGNtZYXdc7Mc3b3ATOhh2Q6jacKwiNjaGrO4MMNcp4VvCSzb\nsLh/iY3xFMfV7J5foCgyglqNHE2aRrTrIVgOvrBwzIRsvcfqgzHjxl8iWzNcsXQJI89lkm3Q8prM\ndRZYXHobrtTk6Sk+9kd/ykynwbu/+708+8TzaCojoTzPUZYgKzKKaYnQJa7vMzElBkVZGISlmEwm\nleGroTqJ8gs/0Y0GIV2EsYijuOqudNwdqwlJnBdIy65UDZaLNobQ9ymKKogXjiIfxwjLcHp5HWNJ\nvu1t7yJohkhdMuwN2TzzLMLkmNImMzlSK3T0TdpWLKTk3OoaT5x6gbl9eyi3NqgHNUbbfV549jj1\n5gwfe+YZmnbJzMwcZ9Z6+GHIuNSEvs9oukJ/NMCuUv7kk5hGvXHBHJPplFq7XREsTYVaGo5jfG3j\nzjTIoiGIJsICx44QScnK2TX2HDoISBaX9hB25sANMFFMkibkpkRIh0mWgcmIdEkDH8sYkIJ40iMM\nQsoiQ5iU0bAHaHSZM42nBJbPYKLZNTPDubNnsWt1Up2jdQG2pF1KDh85wrQs2FjvMtgc0ah3aAYB\nXpHiey2iwmKiC3qFYHOYQdxjYzBAGji02GahXSewLHq9bUaTMdPpmPmFWTzbruQoZYpje+Q5lFpQ\nl21m65ew0XuI66+5heloilU7z0xjkf5WFz9s0518gj2tD9LdvpuZmSa/97u/RRovMJxKgqCsoHay\noki+xAjLy5KsNOiCqgygNcZSdAe9qjELgdE5rxSmK2mhhE00GVOrNaqemrJAKJCWwpEKy1bk6Y5j\nsjHoMscTFslkQhkL8jwjtQXDXDKJC/7kE5/m/e9/P3fefR9l3kSqvRRGkqRxxSkzJZhXB568dqlj\nA7sXdzErDcqx6U2rKvjlV9/IwoGLsYqCvUu7kcKQ5RmT0RhlS+pphJNmlFnO/ksuIo5Bi6p91Zbm\ngm6ax58+wYFDh9HTEcPhgLAVsn/3DM89/RA2JcMkp9/vs7yyzqF9+7j02GF2zS+QbPdxZju4nssk\nK0lGfYgi0qlGa4MTVp+iG8sr5EWGOFcQOhahr5ibbe3cfy2yOKMWBOTJDhNrx6Yc4TDc2kIlE4yU\nWMawNDtDvz9g1neZRClPrvSJ8FhOLZxY05v0KBZqtOqzBNImLQ2n1vtME8P2YJvUFLi1FltnNrk0\nz9jX9CjimGg0YGFhhiyJcJwmURrj2A6IqlhqhxZ9pjy1/igLr2txfm2ZjRWPkq/Rmb2E2fZeLOkT\n2lNKdTf3P3yS9/7g/8qffPJuymmJdGP60wlZmnPrW97KqZMncbWkoODd/+R7Ob+5wdpKF+GVUBpk\nKhkMthFoEBWf4JVttDpPkWTE002yNAVlUeQ5RaFRlsRxXKZp5XVZb7SoBz5KKAqlEK2CzY1NjOOx\nur5GaTtIAXXf4a8+8TFi2kysWWSpUVaKqyoPoFLWKKTPq43XLnVsoB7Wqe82IDUH7TlwHaLhgCga\nsOBb9DfO4IQdwjDAU4ZiOmEyiTl25ACT1WVMNKYmArAqz/YyiS6YotauM1cLSIuIuYPzCGkjjMc0\nmuD5ksWsQB3dy+F9B1g/v4JvYvKtFZQGXUwIVU5ydp1GYxZT5tTaDb585x3ceM1llFHG3vldGCEJ\nQ4diMsB1JUk0whISN3DI8pyszPCDkLVzq1hK4liKtZUzJL0+tuOCkDhSMO71qHshxnE4eX7Aid6Y\nqR4T2ZJ99VlMWRCJBv04o+G55JMIx68zysfsbc7j2Qqd5XidFq7MkUJTr/uMRyOkquoegV/DkQ5F\nVgBRFYA7gqNXHGb36xeQqiSPW1x5tcVo89t5YfM8q5tf5NhRTSN8J6J8L+95zyJFIXnHd74Psgmf\n+evPI5ih0WoziWI6c/PccONNvPs7v5P1tXWOHDyEbDYpAKkFo8GYtbVlRv1N0iShVvcp8wsfl7mZ\nJh//r/8PSRKjjCbPStI0xnY9ymJMksRVihxDkZ5Gm5ISQylN5Tpsq6rdOAgQhYNOUkrhEI/GZKIg\nDzvIdEASn2ZaTHBFh3D+dUi1wKuN17DOIvDDkF73PI1mQC4N8XCAE9pE4xFOczeulMTpFOVZRMM+\nrdkG5WgbsgwvaJAUKb4vmCbjysH3Fcd57/QpzMIcrmchjCCNE4wSSEcRDQcErks5TpDGQClohG20\nkMRxhMkysjwjCJvk05gkmeL6bdJhDFFKuxmQGYmtJdNRjzKboguFzkvwbTKtUbaFJKd7/jyIFN+t\nkfaHiGmPsB6Ql4Ykz+iubxC4Vd1gnGhs12ax6YLyUDn4lk09CGjZNoHjUJM2HbdBoyyYqhSdldhG\nUnMDpBIEnoNwDK6eYAlQUlDkKVE6wfd9hFEYYXAdA6VHb3NA/4UhV139NqQbk5YPINonsTZez2UX\n/SSnzv4Fs0dfR16c5KH77uKLf/kijz37BX7xV/+YD/z4z/Pe73gXRIp3f8/38sabr8OSFqe+9gj9\njTVefPIRLL/BpTe+nrm9+2nO1PiXP/NTfPHLd9Db3CLLU/JX9LOsrKxhpMZ2bIoixws7HLvkMm79\nllvYt/8AjUazQrtGOU8/+SSf+rM/J4tjSqsgydIdILhEGYVQ4Dl2xSDzOgiVIpgjj1+gU68hVIAu\nbWLbI416r7pjX7vUsTREScxTTz5FpDXCkYgCDBaXX3EF9z/8JNvjHo7lYSU5bd9mFA1YG0XcOf0C\ngeuxZ/8+1tbXkFRSk/IV//QTp06zvdUnyzJqYQgKZL2FKUs6oc9k2MO1XCIhObfZ5VzvPIuLu8ny\nAk9AI6xRRglzczNoS2BFU6K0ZHl9E7snka6Pa/ukUYSlSopsymynUxn+FBlKiEri4Tgk8QjPdRjG\nm3iuw7nzm3i1OgcOHuPJe79GU1Qs4cBIjtQ8DvhNpOWzHsXEWUmoFDNBQNN1UIVh3q+zko7pBy5y\nx4UXIXFV5d6lC43BkKcpQtewLcV0OmFmdob+cIC0JYHnoUSdaTbk26+b4cTpj3HwwPuIRrczHN6I\n37yfSfQ4lx/6EXrRXTz2gMs1l1zOZ+SXuOr1l3N29Qv8q5/4Icaxxx//we9R9y1OPfwMu/Yc4PGv\nPcNFl15Eu7lAFI2Ybqyze8++ytbOtfjY
Jz7O937HOxmPReVn+Q1DuQ7XXPtG/pd/9VNcc82VbK52\nWV/vEtZd4skIrcFrNNhz+T5ueutNvPu738sv/exP09/cIkniiqVsDEmhSbNkp+++JE0sRCCqqxmG\nPCnQIsOWAilsorT7qnv2NcO3pjoh3e7C9jrGsdlYPY9yHeIooeHWka2Q1t69bK6s05GKlaceQYqc\n+cNX0JztsN3dxlKSwbCH7/ssLy9zxRWXUfvoVS/PE73/KUxRkMQxQb3G+vk1li6/kmh9A1PEOI7F\nNMnI8wwHRc31kCanu7lOUQsZdfvMzjSo2TZW6FMYRVFmBKGPsm10alCOw8MP3c+h/UsIMiQZstiu\nbPrilFGvT63uI/2QZDRmY/kM25vrWE4dbRyk9HjqnvuYV2AFIaktKHOFwCOONUiFNhbNwGM+VMz5\nPqKQrAw0D6xt0fUhzjNcY+NbEkcJXEegRIaQCVqU1Fs1LNui2enQmZ0hShOQJY2GT55CZ9+UcOkz\nLCzeTuQe58HPNbjt7b/KfQ/ex0XHerQWnmPY/y4OLbyBc2t305mfwypez0984Id55vH7mBY1/vl3\nfxct28VWHaZRyiAe0+w0+NZbb+XOL/0dt7z1ZvZdeQ1BZwZDSTTN+Rff989YOXueKC544vuffHnd\nNn+0yyf/4u/46n2PMFpfJ+7FNEKbvXO7EcIFqQgbTf71hz7Is8+/wBtvuQokfPavP82f/N9/xGR7\nQFqW5FRFyl6/y+xci34vw3gWsvYWZP8eMAlJMcESPt7Sm9jYOMH501/95sO3gsEOfMqeIbA03kyN\ncZqx78AusjSjFw2wsjmiYQ8PzUJoY5IEN54S9wtCJVEUiNBhONhmdfkER/fOXTDDxolncX2XLCs4\n89wme/btR4971D3J9vkubr2GLW1qtiHPxyBBxhNma7BhEtZXXyQ0c1iuzdrTXfbs3oXnemyfm2C0\nIZumnB0MOHzxMVxKMBlFPiKNB5CXZNOkQj1RMuoPsYxBioL5XbvY7I3x/BqP3PcorlYoVeBKTWga\n5MpinBmUZVFTEicIqNk2LV8Rmwzfq+HXLDqegxY5iRNgaxvHkVi2QImq18OybbKiJIoiOp0WyXQK\nzRa+61KQY0nYtc+lfmzE7P6LufNvQ9p7C975zt2Uzt9xyaUBQbhENOxwcOkW4knKM8+e4S3tt/C/\nffit/OKHfo5/8u7jOFnCrAtJHLGxnWIVGSMT8+yJJ3j3976Td3z/9+Aqge25pDu22q5r84M/+iNc\n9rpLefDBJ/ie0+97ed0++4kv8Td/+lmGyZSD821mQ4elRo143CcSFgiLeDLgD37lD/jAz3yQ8UZK\nOO9x+3u+k3d81/t425tuRU8iBFUXp5IuQiqEFNhWrSolKEGaVoazStkIrbDk3yN2XTBeQ96prGAH\nXo0SidVqEzaadLeG4PgobZh2h/RXVgg8j/bCHLVOC6vhM8gjpOcw2R5RxiWBqtNZ2E97Yd8FM7Sb\nLSyvCbZLzbc4v7GGDD1yZbBdhzgtKQvNJDckpcDkKaPJiK3BgByLJM9ptmfJ85JGy+XkymlePHcW\nWQrOLa8xGI65+JJLiJIxo9GQLE0g1zjCw2QJzVaLAovpqFIojIYjmrUF0tLGs0O2zp3FZGNa7aBS\n5xpDy1UstVscW5rjyFyHphfiK5d4EhFPpggEjSCg5vrM1mq0HZdyOqIjE9qWpibBEwZHSnRaYtkC\n37dRjoOkxIiC8WBILQwYpyXPnl7l2cc3OP7wpbz5Xfu55KjLytkvkFj/kVZ9Blt1GY7uoExHnF15\nkF3zbUaT3+GDP/IDHDpwG65lc/MbriQtNBsb2+g0ZxCN0bYgCF3+029/hIf/9lPMLO6iEGVlq6EF\n21nKbW+9nefuv5cv/YePXbBuj33yyxxeaDFTb+IYgTY5WZESZWMcneOUhigasPzCU1gdyTMPPA2l\nwShNXoz49N2f5fU33YTJBK7lUZYaiY9lIgoCTDlBiJQqR2AjtSAVFpTfpP4sQggMijjP2R4OcX0f\nS4Ezu8gn/ua/kYyHXP/W21g4cpCvPPEoV1x6Bf78UYbDIcL1uPu+B1lot+nFU1q1FjNLe3jyxJkL\n5nhopceRg/uwF5cosgnlcEj3xbNkSQRJBLbDyZNP8vyJZdxGwOJsnT3tNiK0aNsub7jpJmq1gHCm\ng3IDio01yjjlsWef56Kjh4iSlGBXm3Qrx8oLlMqrnvUdtS3GYNvODlPXIgxD4t6QRj2gFyUURc7u\n+QXkIKKmHDxh40mB7YBybITQ6BL6owlZPObAwl6c0Ke7sYznNfGtKcONNZphC6kjHA2W61JISV5a\nROMMZWuk9EBILOWQ5zmObzEdTGi36kjnHIcvmnDv3fdx+A3X8tzj8+zddy35+iJG9SGIOLbvZ3jw\nmfdxZM+7aJWCpx/Nac6N+Xf/579A+RPm50JGw6o/PytSSi9gbXOj8tTcHiGOv8CN6xvMHzyEERJp\nDLXQZXL8HLe8/3s4evFV/OHnPvnyuuWlRVAWLIWCU1s92rYgd2zKKMUENihBlCUYUj7yy7/G+nKf\nZ08c5/0//d1I26WU8KHf+TC/86u/yT1331ORaCyFctsU9jyIJrq+D5GBLDS5UhRlCo3Oq+7Z1675\nyxikUgil2HfwMDNzcwS1Bq2FBd7zT7+PH/yJH+HQkb3snp/ntre9g9nF3QT1JocvuYyyyLnx+ms5\nfOwA1152jEuO7eOiPbt43VLrgjm+5ebLCMt1GsPn2dvMOHI4oOn3mWtFzMynzM9m3Hj9Et/3A7cx\n33C5evchDjfbzKYSPRhgxwXrJ8+yfW6TlRdOUrNcvvbQY1x73U3Yrktjts1kq4ez4y2npMRQ2bkJ\nCcNBrzLgKXKCWojWmiAIGG5vs7gwx0yjjjEl/dEQLcDzHfzAwRYlggIpcpJ0ymg6xHIUg+E2L544\nySiacm77PDma2V27sT2XTChEkaDymNC2cO2Kxug6LkopWq0OnlujKDXGZAiV0Btuc3a54PBlP8to\ntMgdX+iirHMYL+fOzz+GV2tzfu1O+uVnufHKT5JOXqAs27Q6l/OXH3+Wuz6/wrRvE9qCelBDl4LE\nTEmLiLi3TduxsWzNsSuv4tzZ8+gcYgTaQL0seeCP/5QUOHjlRResW6vmU8Q5HQnSkgjPZ1hkbKQT\ntvMR65N10jLj9KDHC488ze3f/3bOH1/nt3/hNygyDQYcS/Jzv/qzfOqO/0a91ajImMLCtmxQCZk6\nSOkcwPjHKN3DSAIawe5X3bOv3TVMVKrayXiE0RmCAj9w0GSMtrcox31UXtVNTJRh5QVWMiaxUqwy\nxpUFHikqHSPiCXrYJR4PL5yisUCOojAaWWuSZwLLeMjCRZYuZa5wpUuawMzsPFkx5Xx3C2MJ9GBC\nvD0gUD7TQYyexqycOMlsu04Wj6vmLiGxckMZa5IkwVI2xpQIBLblEtYaWK7P0uIiaImUgixPaNTr\
n9Ltd2u16VY+hJC4TClVZJORpzGjcZxpNiYZj8mRCs13Hdj1m5juklstGotkqCgZZwViXlEGN8TRC\nFwXoHC0qz5GiLDFaMOj1KAG1w/8tS8mLK+uMjMVX7/kMx6cvkCf7uOIKzXNPHue62ySby2fYXf8g\ns+GPkZgtOntjnjv+FS697N18/nNfJUsLdJnieQ7DYZ+yLKnXLXr9NXxbsHXuNLtbIQePHOXya65B\nOjbBTudxZAxv/qkfwhKGIplesG5lMWQcpXjSYb7dZJIMON1d5dSozzPrZ5jkMZ5rYzfnGU0H/Jf/\n8id0J2ucf+wkvbUuyXhaAfwsg+UovvLQPcy1W1giw8qnZKOTWIM7sUd3I7b/Frv/VcrsNOONfxBE\n9PJ4zR4WbQwCQRTnnD65zKkTp9ne3mbj/CrpeMqZM+usbwzor58nKmI2N9ZI05Kyn3D85DmeO7NK\nv1Sc7cWsbicMhc0kvfDO2RtOSI1FnJZ0J4aN7YRTa1scP7XCqZNbnF7tcfzcFtvDIeMy5cnlZZ5b\nW+PR02c4sbXOydWznNo4x/LWGmvbXTzPI00TTp49y2p3m5OnzvLC2honz6/j2CHTOCEIXIyS5EWJ\nsl2U45DmGUgLoQR5GeMHIUWSMuoNSOMpruPvqHYLslwSxwnpNGJrNGW1t8HMUhMocKRPmhqUVQNs\nnEzjjxOasUaVhrLdZn04QggQSpCZBEwFnPM9F+laJHmJstpsbA3xF1YRQYSX9fjArft40+WbDE/2\n0XnKHXeMUPWYqHyEbvcrIB7Fl7fw0T84xbVXXwK6TpkL6jWbKEuphQ3m2k327WpQq1kYR5GjCWyH\nycnjDKcbDEY9clNgpMBWDv7Cbkqtefwz91+wbireItM9TJ7hCDi5eZrCElx27FLmw1k6Xp1ZCUv1\nBq7jEA5ybv62t5BOpvzKB3+aMGzzr3/k36CMVbGSmfLHH/0TrvuW1+PbDZzCoWkVWFhoUaCNja3a\n1OU3qZASDNqAbVnsme9UokPfQUoL0Z5HCInxQ158/AF830KVEaMUnLHktne8C5zK+3Dm4GVISpRt\nM9zahDu+PoNrw9LBo0h1hFLV2HVUokWOzEvuvesrvGH/USxHUQqbPQeOEYYuRX9Arg2ubdPtj7j/\n3q/yttu/DTGJyI0mbNQIPZ/tzQ0Su8RtNzmx/CKthUsx5RBBQtLv4WiBMCW6KFDSkEwjfCdgWFQn\nqmVZ9Hp99h86zF2nH0YYw34hMCYnKwsyI1nvbuEthoRByBw1tp45xcZqj6jQxNIimSbYjRYDKZmQ\n4C8EHJxrVw4DWAgUSiksxyKJExwlcVwPaaDVbHPt26/ij37/EbZ2nebSQ03uf26FO/8i4tf+/Ztp\nr53gxItdFmeP4Ng9dLbJb334CR5/PMPIBlGikFJTmMpGwrUlSVkQT3OO7tnLl+64ixve+AZC14NU\n8/SX7uPiq67G3W0xsiWtZpNCGvzCoXtm7YKdIfUUkU3IiybG8pj3OxxsLEBUIIwmKjPcQtIqxxy+\n9mrWukNOfOFRptOCdsPmgXvuYbI25Q9/7aP82M9/AO0JRN3wGx/593zkt/4zX7yzT75tozKBLSwK\n2wEjkOLCFo9Xjte0gg+VpqtIItI0IRQ1hqMBM41ZslwCknPHj1NbalMIzcyuo0TxCpO1dZQTIoRD\naYWobEJaWozi7IIZHEvgeA0mkzEiHqOERvgKt+Zz3RsuwUoGFMbH0SUiz0l7XRwkKstZH20wn4xr\n7QAAIABJREFUwsMNm2yuncGRHou7d1NkKa2ZBnkypjg7xgoLNs+dY7C9Sc0GVEng1imLgrAekJQp\no8GASaJJsQi8GmWRU+SVbUaUZqSlxzCNsR0FCLSSRFHlrjtYTfn0A/fxxssvJgwEu64+ytyeAzz2\nzPPc+OabeGTtPLUo4/E7Ps/N9QU8z628WhwLx7XwPK+is/gOUiriOKLenmF7PeOOvzqBY+/lviee\n4mNfGaBrh/ilX3g7wzjnUx99hsuuvh2jx7zuokUef1By/Ml7+bVf/35+8Rf/EC00xlhsdRNG20Ok\nitm/7wDnls+ijeHSSy+iSDK+75d+ki99/kvkz69x/OyIq95qkfgu6uo2whRMeiN6mwOY+fq6TScD\nnHqTTFmc317GDUtcMyXPBa12m83e/8vce0ZJlpR3+k/E9emzvK+u9t3TPd57YGZgsIOVQRJiFyG0\nkkC72pU5QtJZkPT/CwkJ0AoJs7III4RnYBjHeG/b++7yJjMr/fX3xn7Imp5pVnCkZc+ZjU9VmXUq\nTmbEe+ON1/yeKk4WkBlqiysMT42y/8BBNl9yMYtHnuOTH/1rxie3UT/Z5Y6/vpfF9hxyUHDVdVfz\n337zPbzvPW+h2a7zjtf8JJo0SC2HJOq1DPyo8bIZiyY0EGqjLwXQNFzXJ18o4YUeMZKcLEDUZWBw\ninpjjahRI2vbIC0UOoZhIdIQYeroqY7otM6ZQ0mNwIvA83EyJkEgoJOAndJpR5QGxtCkQxR00aVP\nhE82n+O5R55l257dDJZGeeLAcYTSGBgfwotiYgRz80scPnGCnRMzCM3AVBqOk0FXvWJBJ5uhE3Zx\nvQgvCtENia00CFPiJKHTbQECK1+kUmmgzJTVegchTDK2jvIEmbzOtNMHCx1ONVp868gh3v/rv8KA\nYyIMh3wu5ZTuURUpD913P9s2bd84RSw0o6ezlc/nCCMfzdbJOA4xPRBpFHXI5iVVV8fp6+AYWX7j\nnW9ET9bYs7fDqeXrCNQ/Ue/MYRtlqlXJpdddwbvDgJNHDXbv3c6zz82hVIpuFJnasYfm6izdZoUU\nyOiSQl8ew87zuT/7ND/75x/CFDk+8qu/wVXlEpXaKlEY4KoQowuY527SNuAjeGL+KPNejasuuYDO\nqWUcLYNj5wmDFNdWxHpKp9Vkdl+FnVdczKnTczRbbZSK+au//XWiNKZYKqCE4tiJeSorVfrzFZIg\nYNOWLSSa6KluygxhHPcgrD9ivGzGkqQJhiaRms788jz5XI5Oy2VheQlNpaSagbBshnZfy3wClW6A\n6YacmZ+llM2xXKvgdlokBCSxIk4k45OT58zxxx/+CGMjg0iV0G5VSdKQvr5h+odGadYbmKZNcWCE\nRqPKwWefYWpsCiP0mJrZzuH9xxia8Xnbm1+D7ntUax32HzvNviee5C1veR0zu3Zh5IsU+kpcn7dQ\nmsFapcZIfwHPb6F0g3a7hQZ47TYAjuGwUmtRKJY5NXeSJFbMz87juh5uENPwQ2xDYhkWCoFlSVYL\n8JPXXs3+ux7n+T/8PO1xnfzQKIvVddx6F+Fq7CTLpsExwqQCUiB0SeL7uG6boZFhgjTGNExEKvBF\njApCdDNPJ9Bod7PAGJr1OjLZ/azN+7TaX2F68jxmxid5dt/z+JHJ/OnvcO/dj7N4SifVIuJI66Eo\nSPj63ffxivO34hgS33cpDpQxdcnVN13H1IXX8unf/yMu
uOxKrr34Ak7NHuX+b93Be6+6jCI23/7i\nP2A751b7xlJnpVbFMwXDfZOsLtcxg4SBySKVMMDI5ThVX8MREpEqrKzD8/c+zC//7m9y1x3f5+hz\nB/jQL/8Gr3j1zVx6zWV86QtfQ3mCTaND/MEf/gXvfNfP8MiD9/Povsd59TXXERsFulEb0h9tLC9b\nuUsraJPRdFq1KjZdkjAg6Lbx/JAo1jh0dJbrr7wcjF6nW+L7SBJSx+HUycP05/LQ9eloEr/VJU16\n4gkXPPqWs/O033OAk4cPMrN5BqRCTxIiFGamF0IN/YAkAStf4t7v3sHOkX4GLQsBxGFCN2yTMQ2I\nYk4ur6OPjbNaXWPv9kkyFkRuD/IT+AHdbpexkQGEChGJS+B2SLwuie+Rhm0kkrzlML+wCmicOTZP\no91lfnGVuqtwXY+rN5W5aNMEpm4TpApdM1hqh1ixxlhumPWFGq2nj1HperQUeAYYEyOYE5Os6T62\nHtGXs4hVhCt8UBGFUpFMIU8UxeimQ8dvYts5MkWDoZ1lPvO5e7n08pt4w8/8ImVzEMu0eN8vXc5F\n2y7kW99/gAsvuYL9h06wcnIBP5AErkmc+ijVC/8rJCR1fvHNr6YsA6x8P/XKGtMTk7zy3T/HX3/w\nz/jA7/0+//Trv8Po9AQLScCpp4/wqrf+NLXVFkGtTWa4n5/K/bez6/YTX7iUtK/M0fkzWPk8U5PT\n7NmylccfeBDP64JhEyooZzJYSHQFIkoZ3TLFT//G+3n/u9/HpUOTGIZNIARTW/by+GMPMDCS5R3v\n+mkm9mxneus4RrZIt7nOra/7T6yvR2jpGQ4ff+7/vNxFCDFJj/o1RK9L59NKqU/8uKg82zAR9ORq\nZBoRuB0cS8MwNYSWQ5+X4NVJugGGVcRdX0fgYuSH6Ha7DOWzWDJBdVtkdYkwNLL5c9U5rG6N6cEi\nQvU4IEbWhjBEhhEZ08C2EogTnn/2MYp9/eQHBnHbNQqmQ6vuovwQX7nMnV5kYmqUzGCJRLXQVYLv\nCXTNpra4ytrqKsMjfaRhSCpCMlKiNA10k3W/heXkif0YP7WIUoNmrUbkx9SrDQIvRAgLaRhEqhfB\nG+63iCMXQ5n0WRqJAR2tiz2dwx3Yg93xIYGcFLhAIw2RoY9hm6iN4s1Abij1b7Abc7kcbbeDbVto\nmk3qwuyh04wPlJicKDJ3dJHbn/rvvPu9O7ngQotXvuI9fOO7T/Klz92B0Bw0HKIgRUkXpcTZfi1B\nRGLmOLm4xnU7JzBIyOk6/soqX/rInzHgx/ztL72PWnOduXYV37G56ZXX8vx930FIA0ca1MJl2PXi\nuknLptNo0+84ZAfK6LbOV++6k8u37SYbenhel1jPsbi0gDBNPKEwLJ2FhVn+9Lc+RE5qrIUtiqZD\nFIQ8/NhdmCLhoquu4fp3vKYXUhYpoYzJZWx+5ufeyue++DhBmIfjz/1QW/i3hI4j4D8rpc4DrgR+\nWQixixdReduBezZ+5wdQea8BPil6zQfnDJWExIDp5NF0EydXQDcdzEwRra+PyYlJjKxB7Me0fJfy\n1BTCHCBMIqLKOnG7i9I1QlIy2R5fo/uDioK5HAdOztJ2ffxuSKXdpu6GdJOUmhdxZrHJnQ88zeim\nGUYG+jk2X+HpM2vcd3KWg7U1npxd5uFj89TjkKdnZ6nUKjSbHZ48epwDp05x+NAxnj10hNNra6y2\nukSJIvFjBCClSacTkcmUcEOwnDwnT50gcNtUl9eIkxBDSob7SgwM9rNeb2KnBoHvUVtvomOQpgLd\nBNMQRH5I5IeEXhc/COj4ITU/ptr2CboxQpoYhkMiUmI6pEEXO+MQxL3OznrbxTItVAqWLZDY1PyE\nkyfnyaTLrK6e4Wd+4ePoxgf4u88ewrZytLshOiYE9KRSCSGRJKkiUhAqiJQkCOCeJ55joelx4uRx\nUi0mtXWGYh2jXsO0MiyjsW3HBfj1Nvc98DjP7D9KvbHGeqtBvfUD+TFLoxO3aQRdjp86zvLiCu/6\n+ffRDUOabRc90TBUyNj0JLHSyRhFwm5MaWorp8+cwdAEKQmza8ssVmYx4hZbto/z9nf/JJqhekqm\nwsQkAV3j53/uNcSdA8Ttkz/SEP4twuArwMrGzx0hxGF6OIkfC5UXRz1SrReEnDx0knplBaIAwzYw\nnSw7d++h3WpzbH6ZdtjB9xMEGhMzO4jLExxa6/YUEA0T0emQxAn6DxyeXT3H3qtvxPd9YhIKtoZm\nWkR+TGNpBdfzefXrXk8axwyMZNh+sUVYb9LtNFlammPbOy7ke9/6Flfu3UlBs/CSmLHJCYyMQ1Bv\n01pY5vztu0DXOHxgP2mYEHsu7RjarSZGItBFj5TVbrdJ44Rus9eM5QYBmVwWqVvMnamgSRNhmQiz\nxxRpJjGFfB6ZSAzTxDRs2i0PkejoQpImCWmUIGKJMCSWZaIZGrX6KtmciWkaPRX9UnmDMKYRBgFO\nLoOpSyLZIXQdrr9qinJfg8svey1JKkhTnS3Te3jljddi6g5prPXQD6RnVViUEIgeUw+kxCFFCZuv\n3P80m/vKXGkXiRyH5bqLM7gJLWpyZf8U/rF5BvNlvG6byYECjgzBKbPSPNdYTi7O4ScxOcdh1+AQ\noZ3lu9/4Km4acvUVl/H9r3+TicEB+h2L0f4isZXn0PISteeOk82WqTVqhEGAYWUYmBrCzJp88DMf\nwynliEkxZQ8FjoIkURimQckQNNxzmwf/3cby0rHBlryIHub0x0Ll6YaDVArbNtlz/l689REC36VU\nyhOqlKRTx7Eddl91EaaQhK0WKo4gjpClHO0uqMDHKRbxui7FQj8nT505Z47ueo3y8Chp6GMaWRxH\n4a42OLTvMFvP28VAX4H5o0coOFlIJI5pcmzf8/jtNjM7t6P8kLwQmGFAPfDIOA5HjxwnXyhSW12n\nP1dg6fAJUqnRaQcszS9jGyGarlBxQrnYR7fTIQ1jhEqQaa+/RKmEOJUo3WHf04ewjD5EN0UKDdM0\ne7zFKGatWsUuZMlmoJA30fSYYt4iDRStsIWtgWXqiHwGLS9ZWj6F1AOEmScJFHbGotvtohsWmXyO\nvJMjkZJsIc/6siCfJmyauJiJmctpd0LKgw5SSI4cXsEy8sRx2nO5hNYTxJMv6nK9QNhKlEIqSSgU\nDReO+h5vuWIvh44cxCoUOTp3BiPqcNPUJpqVFnqmj35lIpMmpi6ZiyP2nTx+zrpZls1IsUjWsHCU\nJAhjkiDhogsvQUQWo0OTDOYytDrrrLXO4KcmN7/yNaSexHVrLK0eoVJbYmggy7v+y/t41WtfTUSP\nIqZvlNv0PkQvSEEKlbVlhPzRumH/5gy+ECIHfAX4gFKq/dL3Nhgs/z5UnuqBMTUJYeoRxC6q3WR9\ndhbL77A+dwJHS8Cw0cKYomUj0ohSIYdAMTo8hIagW2lQsBxymQxheG40Y2R8DH99laxMMKIOK8uL\nnDh+gitvfgV
nlhdZmD2FnkI5l6VdWeaxJx8iO1zEy/RE9NLEI3UE+f4iw4NjrM6vYGgmepiyadMW\nFpfWqHVbHDxxmlYY4aUp45ummZiYoH9gCNO0e2ILKkFDRxdaL78SWzRaCc/tO03YBsvXKSUWUb3T\nw11rBqZtYdoW7bbLysoyS8vzIEKiuIOKWxh6hGlECNUB0aXeXEKzQTNkjzti9JALmtTI53JoSKTU\nECgGiwNMD+/gor0NDDGEF2yhUCyiSw1ESuinpLFBmmiA9r8tn0h7hqJET0shSRJIIE4idBSRGzBW\n7KdfN7jwvF1UPMHDzTr3rS1zJl/ieSWoDA9wd6XKwdUVBkbOrcka6SvTZ+UwE0nbsFlzQ2589euR\nRo5v3Pd9Jq66BDkzzIFKAz2fZ3JgkPl9+xFRxMrSGSwNtu6c4VsP38UNt9yI73v0lOoUQkniVJCg\nECSkCr7ylW8iNPssFfqHjX/TySKEMOgZyj8qpV6gfP1YqLw/+IMPb/wkufyCHdx42W6M4T7a7Q4r\nzQa5TdOsNKrY+TK6YdJpt7CKfaw3mlgZh47rYQiN8kCRTttlzXM5cPzcJ9QTDz5Ao9kljhR+pYrf\nWac8OsL6ffcxf2aWLdu2Umkvcv8jDzE6PMSmndsY7BvsNWE5FlGrzVU33Yqna6ggZFCbYDxXZPnA\ncYxSnkoc4zZcpITxmUnO27GJ+ZMH2TxeQqSSUKUYhiCODWprFaIYQi+BVKOV6FQqLjNYUGkQGwlr\nR+eYnBkgThS6pmNksxQtmyDq3VMaLR8vBTcS+EKSmFnsgUEilZAEAVY2i+e6pImBrgk0aWPrDijZ\nK6jEwPd81hcXCTpLhKsapU1NMqbAtARpGiLQMWyJ68UooZGotIeSQGy4YT30NwrUxiNaSYH2Aqkr\na6DikOFNk3z7njuZ2b2DkXKJuJtSKhTx6lVaYcSjC2fwgy4Cg02DxXPWLQoC3FRnZHorh06c4id+\n6qc4cvAQTz37PG99wxtYWlnl2PEVbn3t65h9/hB5K09b+JyaP0B/wWFgz3Y+/LE/ItESdNWDQCUK\nEiFRQkNHEcYpUsIv/Idf5P57DxAECwh+zHIXIYQA/idwSCn1sZe89WOh8n739z7Yk7FJYX31GJHy\nqdbXcbJlTCdP2GyTKh1NE/gWWImNLBTI2VlM2+Lphx5k77ZdVOs1+rZuQ6UJbx2Y4Bf+5c/OznHJ\nzDS+Sjl0+DC7LrwBSOiGIV6UMDUwiK1ZLNQbvOKiy9DyJkurSwyXC5TKDiJMaK6tsDx/hkypwHOP\nPcP2zVOMj0/TWa1z8OlDxJ2EnGaAhGSlyoNHjjEyWGA1ibAdg1gqlCXodEPCSFFdapL6GvX5KtWV\nLoQQJl1GcZhHMOea7CGPKROEiNCQpEpg2DlSI0HTNfSo18Js6yYiYyNMA89PiDWLrOWgmz3dMqHJ\nHq8kTZFaT83eyhTID49hqIiVbpvhqVFayiDWQPYC83TazZ7CZNpD/ynVc1/gRddLyBf/d49BCQhI\nFSyvVRgeGqGyXmHz1DT3PfIQb3vlLXz37jvZfN5OTlbqlIcGWTx6hDfccB3zyys9yvNLRn58jPzg\nME8+f4g3vuEtPP7YYyzOL/DTP/k2nnzmGdbWarz6lpt56rnnWOm67B0cQsUd9FSxd/s2Lrv0AmYf\nf5Zt115Mokl00ZNo1UWCFnvIVGIKk/f97LvZ9/wC5ZHzSVsheuKxuv7D+/D/LSfLNcDPAPuEEC8A\n936bHxOVJ0XPTZAIIiUo9Q/TDRIy2RzCC1C2hZaoXmhyvU3b9ciRkCodXySkiU/sd8lZWdKOR4oi\nCs5FTnRqDY7MnWbvhRegghglUrQkJqNSFuaXKOWLbJuZxkgVXhAiRY+/bmg6QRwRuB7HDh7Bth0u\n3bOXMAl49rl9eJ2QOO4JyKWeTy6bpeMHGPk8y80uHT/ivPOm6VaX6LY92m0XrS1wYofKYgVd66Ng\nmlTSEF+TmJ5BmI3QMkWe2bfAhReNYyqFrnQ0Erw44cTcKok0GevPo9sWUhg9ZX9doqSJMDK4QU8w\nHCGJUokgxbYtdKmx++KLSInQ+oZwa6vsHM4jDYOynkEmOik9bHa+kCFJEuJUbRC9Xvw+XwATJRui\n32dFuFOFEi+AV3UqeNSrNYqGw9UXXcr8mVmyA2WePbifPRddRd/kBPuffYK1tSrFfIFa99yq47qX\nUJlb5M1vfyvPP3OY6lqFN73pTRzYt5/a2io333Qz9z/4EErBza+9ldljx6h3Am7ZtZ0xIXAf20ft\n+WNsmtkME2V0CSI2iLo+tYUzpGmR3/rwh9j3zH5iPU9qNknjCEv/MeVblVIP8cPvNv/HqDxBTxxN\naYr+vnFSoRiZ2kOztoApfJychVdvIVo9v9jUDDqNOjGKktGPZRoYBjhhRBiGCCmw7HNhRsuVNc47\nbw+dbkSn3SLv2Hhul33PH2B6eitRGrOyutoTeNB11qpNGuuHSFNYW6vQbrfpK+Q4b/suVtpVjh49\nSqk4yNJ6g1a9Qao0TE1S9WOCoEOcJKBpTA9PU39kP6OGTjYIOXRoH53VDpvHttBccak15gm0TK/0\nRZlEKPJZk4JyWFytsy0axDJBpD1obCpTxiamiIVEBT6kGolmgpS0Qp9my2NobJB20ETaFs1GjZJp\n4Me98px6pUZp//NMFQvUT89jWDoWMdV6lbg0ysj0lRtrAiqN0DSJpiniNEHQSzz2qsR71eIIQUqP\n9KVLiRIKJRQo0ITO0eVZJjMOsRsyYGbwCgIaFYQQ3HjDDax5Xa67+WZUZZWOHzI4M81LY0JOLscV\nl1/K448+gcLguuuv4zvfuR0p4c1vvo1vfPObWFaGXVs389A991Jzu0wPjjE2Ncry4hz7jx/kwrFJ\nxP/8PJf9/geIlEIGHk/f9yi33/49HnjmJJ1gCDN/Cboy8RXYfZcglQf88PDxy1fuouIeo1CBZmQR\nG7jlMyeOIbrrZzPsnUYTBExunmFteQUtUWQLeQb6+ml0XDBBcxxqqxXc7rk+56bLLiPutslnNYpD\n/VQWlsnlc9z8xtfRjgRZKfA6LbLlPLFUDHghj97/GO16ryJgfNMEadDCtEMaJ2qUs2WaLZd6p4HQ\ndXKWTqGYZ6G6SE5ohAqc1CaqLGI3dAqqwJFnnkSYCYOFcY4eXSVOdCZkBmHqVFyNpmbjpV2sjuRg\nWmUosnj4iRNcdcluCiRIAUmsoWmC2PcI0xiBINV6escJCfvufxykpDg1wPDENHa5RCACMghaVR+J\nwlAabT8g1zdELe1QLvURrjdJpUmi9y4hCsXsmfke+k4YpELxQuxGSSBVpPQMQwFCCoIk7hHZ1Isn\nzdzcLFMze6l2O+SjDc3jMGG0b4Bvfv2rGANFMlmHN77rXZw6eYbyQD+fXfjS2XW74orLufPOO+kf\nGOKWW27lC1/4JxApv/De9/KJT3yM4eERdu/ezZ3fv5u+7CCXXv8q4rk5
9FbAsbUqTr7M6bDDptUq\nutIghk/+xR/ynW/sZ3Z1CSkUQbBIbMbY1gyR8lFWBo3/R9uK333ba8hkbfpLBXbtmOHyyy6lr6+f\nLUNlEldH6gaWYdKMAlI/oC9fwokVZgqa1OhW6iBbGLZFPVhgfOsMDe1c3zetVvE7bdYqFdqdNjtn\nNtFYXES1qswtrTFQ7keLFetzEbqV56lnDtL2AuxcActOOW9qjMZiwtL+Qxi+R6E4RLXSxEw0DBHj\nuR5J1MFtthgvjREGHhNWP1qtQWnRpdWZxdNyrFYq1JfnUANZ2q0GTijwwhBjYAfXv/2dnPrMRyn4\nLi0Zsc1w6bb7eexMjUvHs2SdLIqUOIpQukUSS4QySYSBxO4pxYeKkq7ROlVBtWCp3eDqV11PR/pY\nmsm26WnsqWEWqlXKiaAaxEz29dG/UzI4tRsDrWeAqudK6bpOEis0BElPgRy1YQxi4yIvhSBNUgzd\nOMtikUCUpjx34ghhtY0qFWiuVdB1kEJwwe7zuff+h/mL3/wMDz75KHd/726kbrD/wAHY+uK6ffnL\n/8L09AyvfvWt/PM/f5k0hdtuu42Pf/wT5HI5zj//fO644w5GRyfYOrGZB+65gxunJ6m7HrWuYnzP\npahShm71BCptEnd13v1rv8m9D38AqzKHLtpIGWwIww9CnKCpWf43Of8fGC+bsbzh/POwHYfpLRNs\n2jmJbmWJSUi8JlIpdCVQcYoRK3zPIzVNtDAATSPwfGxbw++45CydTJJy5pnnGBo9V7DC67SYP32K\noaFhCo5Nt9uhkC/ieh6dVoeRyWkymsXK8iJHDx0gkykiwwY7t20jjWp4ccDCeo3ztu1As3Pcfd8D\nuDHkSmW0NCH0WujdkIutcaKjFY4fOQbjU2zuH+HB5WOsNToY41PM5TJU/BCZQktTGKaGEbZRhiQJ\nBF1bUOwmTOtF5uIuO4g4td7icCbPJC6WZqClvUt6DD2RbWmQoBGYJo00pahZyBBEu8WoZfPovY8w\nMDXMFa+4nkaYkB8eora4Qqi18bsV5g6FNFou623Jzqu30uuMFjQaDVSqEBtCJ0LRm1HXiaLeyf3C\nCQKcJRQr1fMSpCbopBEXX3QxCo3czG6qrSpxFKK5ETMjQ5w4coirr7yaJ3mS6nqDXbv28lf1T59d\nt1JxgDe98S188YtfIEkSXvWqV/HVr34V27Z5wxvewBe/+EWmpzcxMjLJ3XffT3a0jFUcZSlTYuyi\na0iyOXI7dlDeeym6brFyapaBi6f5nd97Px/4j/+Jrt9Eymwvj6RlCFVALgUhfrS6y8tmLK7qcMMr\nbiCj6WSUTr3pk6qY4vAQXuCBLtBiRTlf4qkjxzCkRdeLCFRAGsekfoAXBSy6XebmZynmijgj51Yd\nrwYJo3suxjAlKklRcYwbuwxu2Uph+y6U0njqocc4PT+HkBYLp05xwZ4d+FGLq666gq7bZmr3btAt\n/ubjn0W3TLaft5UTR46RDWM8v80uZ4KlB/aRrFaYsC2qyzVONFt0gohte3ZTc310vcBY4hMbJt5w\nP2mcoq0pmisnefAfPsJ46hGkCYktWIwE2W6b4Y7Oieo6IskyXIKMrhCpIJE6qaGRCIWB4M7HnqBg\n9ORtZ42YrKuzGAe4mqR5eokoehjbgltvuJkt5+9GVFfJGH3ots5616NUGjwb8UpVyv4DzyGERip6\njhlaTyY1SZKzXMYXDEVskInPptlkL5wcJgnDE2PMP3mIRjlHUbPxlGS5WmFkZIwTJ08xV6uxa8du\nBoZdWq1zWyve+rY38+V/+RKmpXPlNVdy+/e+i5XNccWVl/H3X/hHpiZnGJmY5MF7H+SCyy9kZMs2\nnnryEB09oq88ybBjsF5b43CpxsXqKnJ9RUQKm2cmCYMOlrLxdYUgILJNRLSMJCBNzqUw/OB42Yzl\n6tveSloo0ZUxzTTAkylJGCLioEf29V3CZpviZJ6tl1+EUDCzZYooSgiiiKDdwk4lYRCx98KLQUoM\n24YHX5xjfGqK4kCRp+5/kK1j4wRJSD6boVtv0mw0ObT/OIEfUDQzJEJjxWsTug2cPpNH7rmHTCZH\nrdll//4j2Nks+WKBw0eO0O52cVId69AiK26VjJahvHUrx9dXmNq9i5MPPUjGj6ntPwqmIMxKMsLA\n8CPK9QZ5I8UPY/IqQaUxkVBo6PiuRzFfQCNk3CywEAQstnViA/o1Sd6wUVYva55KgTIMDp44znmy\nnwEnz8FOh7oOkdR7PT5Zi+Pzs2zt7+fX/uOv8Wef+jCNQpPc+AxF02Riu4VyJhCGAULgXQRUAAAg\nAElEQVQhleDE8VMEYUgqjV7kayNj/9LTBF4SRj77ugJErywG8JOI4WyRuW6bJE2Rpka5VGbf8SNc\n8Irr6Rsf5dSJU5w6fRohxTmZuW984xvk83luvPFGvvq1rzExMcmOnbv4xjdvZ9OmXWzZspW77rqH\nq258BUoTfO3r3+HCi6/nuquuYSBv871v/wsPfvsBnnziXn765tv43Pe+RpAm3P6tu0CmpJFLTISU\noKUaegJoCsT/o/0s/Zl+VCLIWRm6fhvH0XEGSiStDpptE1kpWUcRLS2hBKg4IbAsoo6Hnskxd3qO\nbLlM3+AgURACitnTcz8wS0xtaYm8JhHdDtm8ReS6LC+tcvLELGEIaap6eQiVksYhWzZtIgo7DA2N\nsrpS5cTRM6SJIJMr0nEDpG6BDPC8iL3mMKNjExyenePpM7OYU2Pc9+yz5KOIvK7RXl9jsK+fLRXF\nnGyjDfcRazFnoogop2EJHdEK0USv/spAMLZ5C6snD6N31kgaIZ3RQezAJiIltiJM3cTSTGJl8vTJ\nUwg0Ei3CcGPKymBVBAw5/az7LplSgUhJup0OohPyp+/9IO/+8Hvptmp0/TbCsilO5sgUxkiVQhOC\nzVs29+q+4JxT5IXxQvj4hfdfMJaNQwZJL3y8uLbEAJBxHLp+F6/TJpfJMjYxzqc+9Sl+4f2/guM4\n3HDD9URxzMcXPn52jvPPP5+JiQm+853vcMHe85menuEfP/95zr/gfMZGxrnzrnu5/vobqfstnnrs\nEG972zvpHxrgmUfv5rFHv0feCCiaBn/3t19m84UX0ai3kLbNx/78sySYJBL8SGJt5IZUEpEInX+l\n3vec8bIZi2nTo9ligg+ZUoaW2yQrBYmWIhMNO5fBX21hWTqRSEhjRbu6wlK9xd4rr0PP5vFbLXTL\nYG5uluGBoXPm8GsBucESrUhRNiyatSbHj5ykUfdQiYafeiRpQuAH6IZNrjzAvQ88jGXptLsdhLTx\nQoVu2iwur6GAgBiZQCMIaJuC4nCJsq4olRz6d+7kxHdX6LdLNNwWxvgwZ1pNbByyGPg1D83OccJt\nM6PnGBnqpxYvkgY9RomhS5YqNdZa62wmy57EYFkXeEGEZuo0Ep9MbCC1HpPmwecPYWPT11/G8GzG\nug4Hkxae7yMsk2i9hSU0QgF
OqcApz+P2L32bbVecRzkLW7aO9Orw5EbEC0VtrY4mNcK0FyLmJYYB\nLxa+KICNkpcehqz3t6lSiDjhoeefZVd2mCNL8xSKOWLPI0kUQpe43Q6f+dSn+aVfeh9Hjx7D9Vx4\niYrVmdOnOXHiOG94/esIuj7f+PrXufWW1+D0Ffja17/GtddeiVIxB585ym2vv400aPJ3n/4M0mvS\nlzfouoJuO+VPPvrH3HrzTXz2U//Ipa+6gZk9l3D86PNEfoY4ikmSDkJIIi0iTiRCnduW/oPjZTOW\nzso6MmNxqrKOYdu0ltcI05BWHBDGIVI36NSqqDSl3W5hOg7V1TVsq8Tk5vOIzQJekFKttsiV82y+\n+go03YDvvTiHuamPXLHMFYO30G13Ofj9u7GLg1hpl067i4jBEDHlgRLn7djKyPAAs8urNJpNlhaW\nCaKQLBqV6jpWXicrM/hJzFgs6X92icGbXsl8u4UIQkqjI3gyYaJ/gFHPozBYpBb6YDosdlvMdjxm\nxjdj1tYZ0E3qzQ5Dhs2IIVkMDeoiwuoG+IsLzGSH0dseoypGWws4XhIkfUWyiU6o+TQSxVojpmQ5\nBH6HOBJ0tYRRHB6yFGOyhxW1NJtMIcdSfY1WVjE8Msahp9fITEyx883Xofc5DBRmaKsMSgk0afDI\no48jDAMVRGfvKcCGc9Vzx1LFOZGjF5j3aboBiJWC1bVl3vnaG5ECzGyWvNTwfJ9W5FLMZTlVW+X+\n++9natMkJOdu0r3nX8SmLVtYW1vl9tu/w+XXXENpaJDPf/7zXHrZVWRzg3z3zge49bW38vA932Jt\n9gjn75zBTfOcnF8mUL2wtqlgpVpjbOsWHn7oPv7gD9/HJ/7yb7j3+yfQpIGuN0lDC61wOYkRkU1D\nYP6H7tmXzVjyWydxhvoYVgK8CBwb5XcJwt5FX+oGke+jN0PWlpaorFe4/OorKGXzSCVxwzpz83NM\nTE9TtG3ckyfodIJz5ijEGp2FVfY/vZ92vYkbhpi6gaYEUimSyGVidJhivkAQuRw5fZy5+RWipKdA\nn6gUP4golUoYmoFSKX1OhlOPPM3oyCDPHtpHenoJGYWo8VH2PbOMXK5wMugSI5gamSQjLAI0hkbG\nCVSCHidMWBnODDmcWlthUyaHiEOk8pG6RRrGZIayVJpNxhwTvV0nUyqT5vuZ90KcAFSYcHxxGV0a\n2KZNwUuptJbZddWruX5slMWH78R2dPTUoK+/j/lGhTSMMAUUL7iQex54lvF8ltyU5NJXTiIy5Z4p\npCm2ZeHHCUopNK2ndpJu3E/OumAvNRQ4x1WTGyHoju9RGuonPhCQtnRavktoxsRAs+mSej5PPXg/\na2s7MR37HMGKO+78NjNbtuD7ITfdcguGZfO5L3yByy+/mh1bt/HPX/sK115zAV//+4+h63D1tVdy\n5Ohh1upNlNSIlYIowtI1nnrySW59/a34UUqjVuWjf/BbvPtnf4XnnjpGID2ENkIkGhT1MrbxrzZI\nvuSzvUzDSFKSk4t0DhxnfWGe+qmTpK0u6fI6mU6IUe9g1js01ytUqqvsOX8PhVwR2zZZWJ6n22py\n4d492IaO22qihT5TwyPnzJF6MYeePUi1UsMLYqS08cOeQmQY9bSINcui1W0yP7/AieNzdLsurhsQ\nxjFBGBKGIUEY4DabdFt1xOIK5zUjMqdXuLpvlCujLOPdiB2JzkTF5YJI5xX6EDuFzeryPEerc3hS\nMrB5hv7+AdyogxA9QrELJJrOthvfROmSm0E6lKWBpduUJiaYCwIGurApEAxFJjKUrHcTIqWRJClx\nklKWGiOWYHhijNmDz3PTpm2sr62yOrdALARaKY8bxoz3D+HWW+w/fpxyaZT5fWcoO33EcLaOLE4S\n6tUa0PPA0rTHs1f/yiVfKdW7s2y8/gLuQxO9gstQKrLFAlsnpinlHRLDQCofoRoUR0y0rCCwI3wS\nWs1zitj57d/6ILe85lauv+EGWu0W9z/wANs2b+Hqay7ny1/7IuuVeb7xlX9gqN/Btk0eevIplpoe\noTDwk4Q4jkiJCOIIUzf40O/+dy4+/zwa9QZSSv7mbz/KyACYWoLlgJZWCJrHefubL/6Re/blU3cx\nBc31NiXTwU57RWTh2jqmJWl266BJ1haWyBf72bl3D8uLy/QPDDJ3aoHJiRniMGXpzArK1Ak8D6/T\nIlk5NwT51FNPsbi8jOuG+GGCbjskKqZYzDE8MInfDmj4EQJBmIheB2DayxVk81lMx8QMFbqhI4Mu\n4+N9hI8e5qJEJ2q3SZKI7MAQ81oXlZWksYflhdhmyGTWIOfFxD58W0YsrCyjuSHl/n7W/RZZ12LT\nzl1Ujh9nYuJCzMGtnHz8QfpNjdZKhexQmTqwnNGYr1RYcV1Gdl+KF6wgg4gEjSiNIIkY1S1WbZvN\nK2s89z8+zohI6eg6zWaLh59+Dt3JEIcxieexeWqctVPHOJFxuKh1KYZVIBEmkCKkTtdtb5wgL/ki\nN8jQKk3PultncytpghTnPpGFUiSa4P5HH2I6W6Ytm/QXs4wWLS7bMUGz5bOwfQf7FxbYv9hC/UDX\n3uOPPk/Hd5mbP43XbjIwMMyb3/FWPvHJjzI7ewhT6DiayVI7IkwVqaaTJOB7XQxDAr3cj8BESslA\nocTPv/Od3H7H7Tz8yONce/01fPLvP8VtP/GLqNjGUQkq6nLTTZfCB3/4nn3ZjOWpR5/Ea3fR4xSl\naeiORRomSCGxC1mSIGR88wxkcrTbHbqdmEwmYnjzdoLAJTPch+MYBM0O/UMjZMam8JMYXsR80K1V\niVpdlJL0FQvouoHvu2Q08Bo1hNSIo4Qkigldj26rie3Y2KZOFHpowqSgNNI4IJ+36DTWGBodINMs\nUfFP0aq1obsO602SiWESXSLLDrbQMAoZ3G5AaKRMKZ+C0DnTruPmbIz8MIMTgxw5dIB+IVh85m6G\np5bIOQ5x2KKqAmhDt5Cl1e4w0t/PcEYnJyrUtQTXVWhKUkrAFhK3G9JfGmdVVCmmPttEhqNxwqCm\n2J9o7Nl6IWdO7sdOUtxOk8H8KGGnRWd1lbDZwnLGQEmkiNB1fcMYQAjZg6SycYqoXj1YL1qmEKQI\nLUElEoHshZpViiBFpfBPd36NP/3V32S9u0ZBeLzq0m2kUQclEmYMxaAxxAUjIyy2Wjzykr3xxuuv\n4x+/9W3abZftO3eTy5r87m9/gHanjo6GkuCphCDS0DRJ4nqgUiyj148jhIVpmpi27OWmkpS8k+N9\n7/lF/uhP/j/+8q8+xcDICP//H3+I//qrf4QloVAusN750Rf8l80N233xFVx50y3sueZqLrniUnbN\nbGXrjs0U+/JYQnDJJRdjA2q9TrNaZXp6kqzjEDYa6ClErS5po03J1KHdwKusIGqr58zhNltYuiRN\nAzRLkCYxhpAEnQBCSRwloCSaZiA1EzSDVEj8RCFjejiKxCWIOnjE7LJGqT52gkaljhn4
ELSxNZBJ\ngplKckFKth2Rtnxa7Q5CNzB0B1/XKSWSEdNifX0VAo/5o8fJ6DqOYWEvPUf77s9SjDqkusIVgsUE\nzK3noQaGWO0EtFaaLO4/wYCfoCKfGAWGoKkCPFOnVa8S6QaplOR0nZyE/MAgtkxpLM0xNTzAQCaL\nqKyTcQMG7DJf+9L30BKNVEs2tKcVtXp941K/0b+y8dAXirN5lRd4jkopVAJSGL1NKuVGj4uGEpI0\nn+W3P/s/eGZhH6+8eIR8GKDFBv2mzVA2y47hYa6ctInTc9uK/XqDd73jLUyP9nHw4LN84Qufo9Ht\nEqMTC50gAaSFrukkSQ+fbhj6RrObTi6XxbZMVJyQRiEqiRCpolmt88d/9BG8VpfnnjjOFZfs4j+8\n5zY0mefNt72Ru+6450fu2ZftZNHnVzByWTRHp9lsUDYzLM/OUXc7XHbhZayvrrG2ukbf0DATExME\nQUASx6AUgRtj6XlWaxUmRkbIlct4nQ7t1rm+73oMYRKCEiwtV9FUimHauG7QQ1GLBNJe3ZNKU4Sm\nE8QpaRzjph6Gk4U4YGa4xMjIMO3jbcJ8HwdOrOBYiiV8wKVhJZhxh8N2ipfGzCQaHaUwlIHdjYjt\nkFKxyPz8GaazGVaXKgwXc1jFLPm+Io2FZZxYRwkbjJiB4SJxqpOQEGgKwoBJKbFDhbe0gixnUWmC\nSEGbGOeRlSZ9eoQdx4xYOazIJ4fA73gMGwKnVYXAJJvJoesJvooIDZ2FVoKRLxKTAL0L/ejoGI3W\nLH4Qno0cv+ByvVDaIui5aYZu4mQMVKqjiIijnkB6ioQkJVYQonhydoET3ZjLRsuUkxghIOr6mHZK\nWFfkxblI7be9/+d5/dvfxCNP3E8UJCiVoKDXyaiZIDQiJHEYoOuCJIxJ0xjLsnA2NMiiOESmMVKB\nJiQqSUBITh0/RZomjG2+kL/8zKf5r7/yazz50B288Y1v4k/+/E9/9J79v7j//11DK9kkUuCGXRxN\n59j+QyjHIGs5LC0t4/o+ZjZLy+2yuLKK0CSaLqnWqrQ7HcanN2E6GU4320xoGdpuzPP7Tpwzhypn\nkZFDhgxGyoY8kInpxL1+mtBlfXWZKIyxnZ5sjjSNXlGgaSIdjen+MYaA0/uOsm3v1Tz87bu4RDPp\nHxnnittuxjuzRmXuDP5AH/OaoHbgGH63QWTk6TQalDIlsonOQrtKuZgjiWO6WkygEprtFtJrQ+Rh\nCBtBl5xyaHRcgjim2q6hpQljEjKxj5UqKlpCpDIYqUYgFCPbdvPI0qMMbd+Kd/AEUZKiRRHFnMHC\nyiq6obBSRZgmhKQkOiyGbcZGt5O48wRh1CuKRFKrrnHk8GGiVN8oZXkxXCw1yejoKELA4uLyxlGT\n9sLFSYTUFGojwCyUREsVEkmqUgI9z2986kt88j//PDNOigSkluJ1XZTu8/YLd/A7L7kj2UWHx++5\nF8sU+LGHNEzCVENJA032BAhV0uOIpqqnG21ZGYrFPPVGA00KhEiR9BKtIlW9xLZSjE5OILUMC6cP\n0+4Uueeeh/nIn36IM0cP818+8Gv8wxe++UP37MtmLI8dO8J5m7djxxrVWo1dl1yCR0wsBLqVIQkS\nGtV1mu1l9l5/FdFag3ptiUJmlEwmD1GK3/HIl8ucPHyUhflFVM6Al5zoy+ESQ8YwMkhQCeiaIokD\nUhSpruH7CVHikyYJvpegaRqel2CZOk5GxzEkfaUS+5/dR9wR3PtPt1OuBAwMlTFLDm2/yfqhE6x5\n6wRhg+nNm/A7Pv2dEUJd4U9Mc/+z+6hYFlmRIOKYlgq4YGyK1cY6jRQy3YScEqgkQkMjY+jU3BZG\nvoQeaYi0QyI0qmGMrTk0pUZs2iQiJFGKJ++4i1uufgV3Pfow1+w5H+3EMom0MP4Xc+8dZdlVnfv+\n1tp5n1w5dk6SWt0tlHMCkSyRg4yzwcA1F4zt966vMzb4YjDYBgw2XGwTLtEmCWGQEJIIyp1bnbur\nunKdqjr5nJ33en+cakkl23q+5g95jlFjVO1TVeuMs9fcc805v/l9xa00OI6RKmoqwo8Sdm/exsFH\nf0Lv0Ci12hJbjBLNuQqZkXEm5iP2//Bx0kgQ4wMmnI8qKAxTZ6W6jK7raIZOHCuEloKeIpRCSh0Q\nSCEQJCihSEnQpIA0xjNd3v2hz7Bx0OIXXvczlNKUYaVIHZdGfQmeQfkmXAERVIOQRKQkqQTDRdJ9\nkKkkAZUghMTSTSxdQ5eC5ZUFspkMUeBhPjXNqTAMGz+JWL95I9lsFjfj8uThU5SXF/lc9VO86y1v\npen5OG2e0543Z7nsij2ce/QQI/kSeUNx5sDjeL5HfmgAw7Do1JqYhk5OT5C+h7+yRFZFuLkelssr\ndIIIkQpOnDhMLfKwcjkW22vZ2OcXGoicg+k4aIZCodHWYo6XJ/FJcNpgpAF6qiHiGCUikjjCjG2a\nQRtVNzm32MRODAYWq1xkWgwowRMTh9m8lKNgJkTHThPKkEoUoja0mD53Bt3MoTs6Qlhcvfki7mvU\nmTNs8CJKGZOFmTI1SzGTlaQtjzFpYEkNI46RKiRXLNFMwBYRqZYwFXTIGSbj+V7GR4rMB22CVBAm\nCVmh4S0ssjnXw6mZScbDDmzYxo4/+jBf+OUbWV8oMVbsgVgyW6nguAV67CJOKYe7ssLkE/vR8xto\ni16m5xtEadplmRSrWK8kWS0jd/UZn4kFOw/nT9PzEaZbPj7fjznf0ESlJEpSt/Lsr6Qc+9TdSK/G\nQMGht1jgZ669cY2zVCoNpF2krXR0zQbdRkgDIXW6xBMgVYzrGGQzNp1mg9D3cU0Diy72U8UhQulo\nmk4q4YIdF2E5NkEQcOjgITwvQNUC9k+dwX33OxjcfDmf+fx3n3PPPm/Osjx5msHRfsJmh0alSaZQ\nYN2WLVTbTVSaUBjoI4pDND0lrlVxChnay23Ozi4yv7hIJ4hZmFuiFXoEQtBJEpppHfJPrxEFMR0j\n5lxzESljhFJ4WsRKp0Jiacw3PLaNjdGn9xA1PQxHxw8DDKlRLORJwgChYl5w0U5m/uobZNY7FPtL\nnKu0mW+FFIOYQiaDyOWIlytkswVOhx6asgiCOiLKMFdtss42SQpbCDdIVmbOoLsGvcMDzNXKDKzf\nwMLMJDk3Rxp55KMEx7JJIwOts0Jk5vFMndG+EnF9GWUUyUmLVuSRailSKlrNKkngY5QEHSkRdZ9d\nmYhLRzbjhy3i2EP5Cd6Kh6PDzNRpNhcvxg9ivvTZz/HzL349YZpQqS8iZIqBQ0KymuB3I0t3pkU+\n1V8BiZSgklVHSdWqvHq6SifWLTFrWjfZl6mH0BUygTgGLd/HZJzSaGu03LX9sUwmQxhrSMtF6HaX\n6T5SoARKpmhaV/dF11Ma9QqSiIxrkPgJYbuJYUgcyyHwIzTLYv227RiOTXlhgdnZecLAIwFyIuWW\nm27iwL7HuPa6K/naZ//uOffs81YNazYSHn/
8cQ5NTHCqUePYygpPzE0zFfqcWWpSERadviLahq14\n+RIiXyKyu+U9xy6x0O7Q7LFYMTucCVdIcia5Ut+aNRSCoF5DUwFBu0WatJlfnGJpdprdWzazXK8x\n36xwYPEUM1GFpojpxAGapuEHIYVclqt3X4BtJJheRN1POasH3DK8kUVT8cChU8y1G4Reh7xhgh6S\nGeilrAIiKQg6HpplUEwSWkcPs1JbYGRgnNJgP4lKiaoNtFRgjJRQhsMVH/0Hyk4f9nyHRHjkB/uQ\nqeSGa17IbL1OEsXoaGSdHBk9i+NDIhR+zsTRJTKNqeuSnlaNb73hdfSW52k2l1FBgpGmxH6L0Otg\nFfPE2TzmwCDLcUysuqSHrUb7qc9NSYnQJEp2CSo0TUNKgVIJECPE081JXdeRsqtaJoToEpGILnXs\neTCmFAZpFJOIhFgmREGMFiviMMJP104oZq08QZRgCA0THaUMTCFxdIGja7iahp7E+K0WIklJg5jQ\n80nSEMPWyWYcOqGP3TfABXsuIQVOP3mcc1PTtIIOXhhgI9Gl5FWvfAWPPXGQk2fOYYr/opOSRCGX\nXnYFbqmE1wkxUvBVyMDWHdROTZAkEkuBV24Tpg2OnjxF6odUvTbLi2VqIqLlgMjbtIMa5XYZLQKe\nwapjKB3LskijiMD3aUcR2y6+kOYJnzOTJ7ju+l2cOzuJlCZe6tFsLpPRDDpejeFcDzt3beDw2TOo\neZ/BsWFqtkHp4l1s0CxuGCvw7TMTHKl0GI4Mcl6KMVdmPNdHZ2UWNwGZpNiFIst+m2wY4wvR5UhT\nEXZfAcc28UJFbJTwCqMs5Ddx1S/cyT2f/gThUpVNQxfwqD9LfW4G4eaptyS9uSwIHdP0yWkmyhRM\nLq1QMIoMmTrKn0eZOoZuktUEutDA0EhjhZ3NUBCCXLaP6olTbO1ErBvdSOLHREpHSoM0ASWfpoGT\nq4l+F1n8zE6lQohunnL+yNU9oqVPIZDPmwRUukp+oeiivFdfu+mGG+gbGIBntDjGB0ZwnDYr1Qap\n5iKFxLAsRNKGtNMlWiRG0ySB52Np3VzJcW00TaPpe/SOjLFu8w6mpqaYOneuSwKvUqRK6bctlITZ\n+UnWbxrhFa98OR/+q08QxM/NSPm8RZZN60eRQGepSlyt0pifJ6zViMplOp0Gse4TNFqsLJQ5c/w0\n1ZUG5eUq5cYSy0mTildncaXMxOwigRdS8+p0Ym/NGjJMadfb1KoNaq025WqdRw/sQ3MMXv7qO6g3\nyxiWTprGaLritluuxdRStl24gc3r+7BbLSzbIHtgnm0XbiMWClEqcs9DP0EdOcaIVOTWDbOUpqiR\nIr5pUdq6kWlbZz6OWI4DltIOYRjhxpCGMc5gDypNSBGYuoGMOwy1mtTLxxg9e4Qnv/wNhpMUW5M0\ntYQ+x6B97jRJo8PIxotoJ4K2SvDTiJYKkZok9RU1kbJxdBsq0RAKoiTCQYISBElEPWqSttok1SYZ\nJbACH2kqzPoKy+cmIFWs1KpoUqDrXUzYU3St6ukJwvPlY+gKzj4zeliW1VVse8asS5chRq1+3/1C\ndQ92hmly2eVX4Jj2mvv2tre8BV1FBKEHEkxToesRKq5D0kKXAWHkE4Y+SiRIHfL5LIqEVsujNDSK\n7mTZu28vp06e7HIKkCLjhF7bZePoKLVmlTf+3M/ypa99jU1bNzE1MfX/N1X8/EWWRn0FTdpUai2I\nE/p6SywvLzM/O4sczJOWbKbPzHP46GHKS0tdeHwQMFevstCsIG2Lnr5+qtUmXqvefRrG0RpAnpGY\niCgiSgVC6iSBYqlapqRnuOfuuyiXO/T39OIlbbZfuJGl+Um2DvWyWJnj4fkVRn1FXMhz1eICsx2f\nuYUptMQnKle4IVNkKIwxdwxzeGqReKbMRNFkZ884auAUuVaAE/g0ChZKNxhMHRb0CGughLNSpVpd\nwTBNOlpCfwi2DFl43++iy5DQb5OxTbSmh0h1ehBY+RwLZ05gD/ajFx1ioaCQobFc4YKBEc5Vlzm5\nPEV/bw+p76OloEUK3VSkUiKlJBsr9DDEbzZxNZsyDYbaHvvf/0Fu/cQ/0qzXkbpBnHaZXJRQpMna\nSCFWsV9qdcOfjypJ0iWt61K8rp2HkfKZE5VPY8k8z6NvYJCJo6fgGcQ8X/vnr7JlyyZ6B3o5eOA4\ndn8vkVfBkilSd/E8jyhOSYkxTZNSMU+r3ULFCabrUKt3qDXKJIFHtAoKBUVGE2QLLvsmnqQRpLz2\nNW/kxz/+Ed/+1g+wdIeE5x7+et4iS9i/gZmVOmPbN7LtpmtJexw2vfhKRq+9GHs8R2IE2Otttt94\nMaUdg/zG+/8Hw1dt4JZX3sSwW0JrKxYW5lkul6kvVAnbMUm8Nv7PR3VO1edoRnWEimi124yMjiJS\ni8ceOk7cgqXFKko3OHLmBEZ/LwfPniURklaYwMh6OuUWvct1MsemuLEu2dSO6NdN9JaHLRPq9x9g\n3a2X0lcw2SwEs3ffxXB5ieV2jam2j7cSoefzNM2UnK7TjFKCnhLS1LGNLMU6+IYijVKCdodCs82C\nVJTSDLopIQjJGwaqVmHQdClmC3RaAb4XkqYagWNxqjbHFVqRvukFwqyNGaUEIiYkZEqmXcFTkRKr\nhJZhcDby2Bc1qC54HE3r1JZnOHXwEMVMpqsTuYqtkkg0IZEIdK3Ld4xST40hC7oOEMfxqmMkWHaX\n7lWpBCGeURGjm/Rrmo4QGkppBFHM/MIy6lm5wu23v4zrLr8C6fmMFB20Tg3bMIjQqNRrhElMnCY4\nUpC1DKq1Bk0/ItAcqo2Q5aVlOu0WQXz+fabkBVy4bQszK8tEiUatsYKbtbnjjrutv1IAACAASURB\nVJfxmc/+M5Ho4PNfVFNShnWmZ87Ql9V44Ft3EfRnSSZMau0OK+UV+gYHSKVg7tgimqXx3fu+yfBY\ngbNnZ9iwfQjvyTO0Kw0ybpGVZhtNExCtfTI0y3O0OjGBHuKYGrV2Ha/h8/f/+Gle87rX4fl1itkC\no2ODTE4c5+oX3cR8vcGBM8fYfuF2HrrvJ1yu97JpZJyF0xN4vQWMUj/NdRtIF5cYLmaxGlUm952g\nIEy8hSWKyqSCxqJQVHtzHGxWKMzUySloC8m5+UV2b9nJYrNF6GRZIkRGijEgNSRJqtGQgmYaEaYh\nKyLEDtr4GZNCf4l2vUraqqK7Du24O3NiJDoTfpXLxjajRoaIK1WcwOK0CtE1i/l6m1Kc0NM/hGvZ\nbNl+ISfmzrCh3yXMBxTy6xjbsJF7vjaDlBJDF8SRQj6Vo8g1MPyn4fqrDUupoT+D0EJKSJ86ua2d\ntjwPwjRNiedF+H6KZWa602yrds8932FxcZGbbr6Br/3zN1lYWsTNurT97ghGHMdYloFu6ni+TyeI\niZTED7pc0XHcpYtSohsNpFKMjo1y7MwZEl1ybmKGu77wj9z2ytt54Id7UfYo0hpExR4w9+/v2f/c\nVv
/prVOvk82VeGzfQRbbFU7NTfH1r3+Hx3+yHyUcvvylbzI3tcyxk+c4cuwUn//SP9HsRIxv2ELP\n5kFue82LiTWNZr2GmbVIVEDGXUuy97LX3khMQL3ZZKVWZ3hsgOGxfk6cPcaFF2/nL/7yj/CDGknq\nMzg6yh996M+4b//DGMUs04vz9KwbZSjQSGptjFKOpJChNb/EfKNCrVKnvVCh30sJZpaRm8c5o4cc\n6U15wvI5otqExQybL9xJqb+fbUOjXL5uI3vGRzFDn2L/KA0/oGfrFvIj4ygEMXE3x9B1VsyUnq0b\n6Vm3nnxvH8PrNuAMDaLCNoVmCyNO8aQkRSMgZS4NkSs1wqNnqNgW4S03Y+zejR1KjKxDT/8IPoJq\nrcrUqeN0FsucWphi8thp1m/aBoZOmCQM9PSRtVx0rYsPOx8ZhFCrFTH5dM6yihYWoiuTcb4C1s1d\nnj56ydVjoHyqqtYtK5umBUrj2c9sy3S54fpbmDgzzVVXXkcuV6TdCpFCR9c1DMNAKYEfJbS9kFh1\nSQDPjxQ89f7SbrvItR0Wq1UaYcThJ5/k2LFTjA8O8vhjj3Lri16EF9fRTQPhr815n23Pm7OcOnKK\nkycmmV+sMVWts1DrkLP7mDm1yP3ff4ixsc08/tghmp02hulCmuN733mMQ4eP85mv/RP7Tp+hmiga\n9TqOazE00sdv/ua71qzxM3fewk0vuxJEhCkDZsuzSEvw4EP3kxJy6vQRNm9Zz/LyIo1mleWlKkGz\nxcTjhzg3OcHs0gLZWCLSlNl2QG79KMFymb7eAhN6ypFohdOuzzFH8UjcIcwXGOkZ4UV7ruEFG7Zx\n8aYLGOjtw2u1kU2PeH6JEdtiaWWO9Ru30NfTy7nJaYI4RSqJpsASBrpmEkmN+3/yY+aXlvDaPnMz\ns0zMTFPKZdjQ00PqB7TjGNvMEGsaqphlryyTyhg7kOy6/uXsP3MS0+uQBm0ay2WiRp1sGJELA4Zc\ngzBuMWiYLBw+hheHNNtNoiDA0W1yuSz5Qu6pxPy8pWnapUpavfTMmfzzPxuG8VSC/2w7f80wDHRd\nwzJtHGctNmxxocmpk9PcfPNLmJ6e4o7bX4Fpmqt/L58qGjTbHlEKUdLt8yCezoskYGg6jmlhaAZS\nN/n+Aw+imw77Dx/GwKFU6CdNPd72c7fyF3/2Vr71pb/iuew5nUUIYQshHhVCHBBCHBVC/K/V6z1C\niHuFECeFEPcIIYrP+Jv/KYQ4JYQ4LoS47d/732GnTm/OIpd1GSz24i+uUKvXiE1JpVal2ajT21Mk\nVh0yWZ18r4sXNnn4h48ykBlBBIKk08IxLXRd8UtvvgNrZO08ywOP3ssvvvnnGNsyTHYox52/9hIy\njsXvv/13uPrqPWiOZM+uy7l0z05UDAvTZXRpoDsmBbfES6+/Dncow+FalZWkw8LKMtN+i5NFjX02\naNk+uHQHN227mJ3jm9l9w3VoIqbR7LBSq7J88iSV8jJzzTrznTae30IszjEuBNH0FFarwXrTpB22\nMA0bqWxqliDnw3LqkzXyiFix0m6TC0Li6VPozRUs18Z2DLw4xin2YUqLtOMz2U5oj+SIRcS3PvDb\nZCIPu2gxIE0MqWFo3bwhSGKEzFPXYEuhn2BxmqxwcLQUKW3QEkQUoicBgyWXQlYnSbszLVLTkCLF\nkKDRzWvS1alSqWurnMkRhtlN4p+Z1D9zeCyOUzRNZ7A/x6mza9UP7njlS2l7de69/7vcdOstRFHI\ndddeiyQliD1CFeIlwaqIhIYmJLrsEgWej3yGrmFqilJfLz6CK6++lbu+dhe6itncm6fiL3H68BE+\n+K7fJjtf5tg//xPf+8Tf/uedRSnlAzcrpfYAu4CbhRDX8VNK5AE0lms0luv4Kw1EJ2KsZwhHWGTs\nPELZnDk9RaFQor9nkKNHTjF9dp6ps/Po0qXdTNm/7yjZQg+hCbVmi6Mnz5HN5desYRo9/MMXPs+m\nPRvo3Vjk0ceOsbhSY++hfWzevImlhRrHTx1j9+6LedUdt1PM91BdarJ75x7KExNMTpzGSTX6du3g\nlJnQrHUY1HO84MKdXHrjNYxv2MBIoZd6q8qRfY8h44jq/BILE5PE9TrxShWz7ZNx84SahjAMZApW\no4E5O8OwoWPrJrkgoZkmqD1XMTPWh6trbE51RBog44hpKkRegxcoh8rCCnNLZUgjHA2WF2bQBaRC\nYDkWc9U6QQB2OyDw6tS9FpECpWsopQikotKo0w5CCm4P5baP8H00r44pJXGiKPTkGRofQ5oWqRAY\nlo0U4Ng2ru2QdTMUcjk0BIaQXWQvktAPIOnmJJZlr0YPfQ1Mpjsn0xVF6nQ6hEFAPr/2vh09epTr\nr7+eKEw5eOA4e3ZfSRxqvP2t78TRXSIvQaI/daRb3aurxOQGjuOybuN6Pv7JjxGrhI99/ONcdtUl\nWOYAH//QZ2nVJ1lX7Gf50QP0N2PcSgen0kY01hLL/185y+qbON+pMekq21TpSuR9ZvX6Z4BXrn7/\nlESeUmoSOC+R968sTFK8MCA2dWbCDpPLS5wrl1mq1DEMHdPQufnG63j9q2/HdTSkVPTku/otSRJ1\nP2gvxLB0ejJ5sjJLwRpeu4gPv/fWd9M3UODgk4dp1D1kbNKJYkhsRgeHME1JpxJw+cWX8b4/+mO+\n8vkvcuFFW3nhNddyycUXk0xW0HSbvhfs4WC9zEyjSnlqmnyuQKvRoHl2nsrZcyRnp3GaIXYiybgm\nlmngahIZROScLJgGCRIvSlGWg6EpvDimEoZkE0kto9P3q2/lllf9AipVuCQUDBNPg0A3qQtFhYTY\nsJFSRzY7pI0qMgrQUCgpSYOQUwsz1B0NbNAsiZKKjoRWGGGaJppr4WRdNu28gP6RcfKbNrHjwh1U\nZycIooBcoURjeZlCtgToRLHC90MKbhaZdimpHMtCk7JbVk4VhqZjGToiVehSe6pS1oXzP11mfnoe\nRqDrBq6boVKtMD+/Nqk++uST7Nu7l6uuvpIg7HD8xBF6+/LsfeIx3vHf/jsZO4NK1WplrYtPy2az\nlHp6GB8fZ9v2rdz5hju59vqbec97P8i3v/0gU1M1NmwcJfRr9OT7eeBzd2HWW0QrS8wuLSNViuOs\n7fc82/4j+iwS2AdsBj6hlHpSCPFTSeQB1JptIiFoRm2mwgbCsbFsi7jSAVfjVXfegSyEvPDyF/Oy\n227j45/6Bx78yUPoThaRwNj4CPsOHaDHKaCbOl/40ud4xRtesmYNL6jRpMET+x/hW1/+Ht/8wSc5\nfWKRA8cfpJAv4JouF2/ZyYc/9jGuv/kGDhzai6VJin0Fpo8eZ0fjEtZVO8xXz5C9dDtbrriK5Uf2\n0Tl0inXX9tNYqNDK2qw3cghXks92hWBzQhBmi6h2C0ybvmIvC1NnkEGClgY4vSWkkWd8+ya+v28v\nIozolRYX1CM++bFPkeoaoSawlY4RBQxi05PL0ghCcn5A2m6
ipwmDmqKRprQiH1s3sc0C7XaNVrOB\ndCW+F4JhIB2JtGx0TUczDarlCieOn6Q+O828SOjPFQmShCSImQ9bFHSbickJwtVk2XHzuLqg2Wmj\n613hIil0xGp+4qSKgYEBavUG1WYdLwmQukmukKfRaAExrE5YplGIlGBZOpZrQhpQW16ATU/ft56c\nzcL8NE88ErJjxybiJOHxQyd50xt+Fr/Z4tI9e3jyxJM0PB9DN3BdF8dxGB0dJY4TUBonTk5w+uwM\nJhqbNl7AwIZRnvjRd3jNa25hONfPZ3/yQXRbMbtSQaUaOy/aSJ+7NsL9XzuL6rZv9wghCsD3hBA3\nP+t1JdbiIP7Vv/i3Lt7z8GHQJH4aY+Z13IECptBRtsYb3/ImzqycoBKYfOCj7+PWF97C6994O6Hu\nc/LULKePHmd04zp2X7GL3kKeRmeFa3Zs4WN//+dr1qg2A776jX9hbHwTj5z5Efd+/2GaVZN9B/fx\ntrf9Fvd881+49NpLeM8f/y6dJKRSWWRlZZZms8OfvPfPCcseyb5vkSnPMjEzQ+wJwmaHogThGiQJ\nKFNStGwqmmRhsYyjm4gwpiMlabtBVegMDK8jzDkUDI2+jIE50s/RekCaQJuEKdVhXSy49z3vZDxt\nMmcqhBeR1VN6pcRNFG61RSIFCoFITDQhyAgBlg6WhYwhlgK9MIDV9jnS6TBg57GEJBtrxJGPaMak\ngWRAagxpFiuWzSZDo08YTE6dYczUOetk0BKTOGyQtXKrDceISCXEiVot++okaYxmGgRhyHDfAPVa\nDTeTJUkTtFAgooCsCtg01k+PZZExbWzTIgw9NJVAEgMxk/d9ldFntc7DlTl2X7ybTRdcwOimDWRz\nBc5NzfPtu+5CtzTyg3lefemrOPrkSU6cOI1jZ3DdbJfBpVbj5S9/FbfedisZt4efPPgNAqdAeLRG\nJvEJow6//dY3s2VsjANHjyBsk1AKfjxxFp6VOz3b/sN9FqVUXQhxN3ApP6VEHsDNl2zHsW0q7Sae\nFByfn6OTauT6CrzothswD3anC295+5U8+OCjHJ88Ro0Zjp8+wq233sCxsyfJDxQQeUWtskLTk2zZ\nuAWegdL/4t/fRdbN8/Lbb2c4U+IPf+/9vOcP3g96H9/60j20vBV+5Y5f5eNf+V8cOnSUm264je//\nyzILs1V+4XVv5ZeveyF39PTQjGaR1SYZ08Yq5UnbdZaOTxJkdYY8RTXyaKQwt7CAzLo4YUJ2oA/D\nEsS5AoWNY6jp02R68sSdLjvjUq1Gp9Wm1EqRuk4n8UHzSFKD1I9Ypzm4SUyGBDNWxKnAkJJSKvCJ\nqFsKP04h0pGej60MJDH5jMuBVo3IsQhUCyPR8cIOfT09UKlhCQsDjU55miQMqCqToi7JVcqs780x\n3+jQNgNMTQAJKlEgFHEKlu2SpCnxqmYLKdhujmrgYeiQBG3yhmRdYZCirWMmXQUzmcakSZ3El+RN\np3tEtTPEcYBp2sThWkLuzz1wL4lmEKHQk5Tjh57k8ksu4k2/9PM4RRehaaQI4lDx7nf/Jnv37kfT\nJKlu0jMywOjmHoZGMtzz7W+jHJ0N4yX8Rx/jjjtfiic8XrJrD99+6CFu33MpBUPgWzHtyMewNH5w\n/D+pzyKE6ANipVRNCOEALwLew08pkQcQkNIK2nRMxWK1AbqF7wX05TJ88TtfodycpjrdJnv7z/Ca\nm3+OU+Wz8KTD9p0xmbzD6++8nUcP72VmepHqXIsjyx6nDy4+fSAElC/wkhZZN6JQENzzve/x3j/8\nXQ5NPsZHP/hloqDFb/3p/8vrX/tavv6VBzj86KfRsTB1B60pGD/dJkhiJl2Jr+n40qQWrBCHPtml\nJfxhh9q5BlGvg2u52MUcA+PjiMlZqq6DqUKmfcXSzBLlRoWjSxXypqQ/kmxqh8ykHTBlt8okbAKp\nQMbIUEMQoyUKRDeR1U2TdhIx+MY3ceRf7sZpNxFG1G0MxtC2JL5hIoOYtq2TAKGUiKQrRYhhkUgT\nTZkkSUo9aSMHh5mIIhqex7hp0qN07rzhBlTR4Z8e/BEISaxSJDoRqx15IVBpBKSkiSJFIBPI2Q79\njkWWGIEiCkP0fBHNsFCro8QJihU/QHgpWqeNoWvEvkcQru3gJ7oJSmEKgZKCi/bsYvvWrSwvTuNH\nFrliL7qVQWqKj3z0w+zbe5B3vfPdXH/ppRQKLicOHeTO172KX3rL66lUW5iNFsf9BWIr4IG//TzD\neobXXnMFiddBiZSiYSHiGBH/61L3f9hZgGHgM6t5i6QrwHrfqlzef1oiD8BxbVQa06g1SZOYIOiQ\nxhErS2VMcw8X7b6I6fwyH/3i5/iZyiKNVsREeYneQZP+kTxhGHDj5ddxvHAQ6QtkkBI+SyZPCQ3d\n1fjx/sfZdckuRoaHeXLucb7wtf/D33zww3z4kx/g3LkJDhw+TCHbwyW71hPHIQ/f/wSlnl4u71nP\n9P79rL/kEozBIvH0El7icyqukg8jBne/gCE1S9Bjka8lzJgxbqnI3EMHOUELpxVQIcaOQgaEhW2m\n7Ai68nUiEuQyWWaNlKlAYpYG0JZOEUaC2LDJEhLrOmEssFRKR8Vc8cbXs7DrJnaWCpz85N+hBTqm\n8JjVTK54129hBh3u/ZtPEJo6LhopGiYpkUyZT0J0BZ0wJDE0NMumlMliFXJYS1VymTya6fP4E9/l\nt3/nD/naQw8TewEK2Y0kSdxV/EpTNN1ACNWtwiWKjJNQsiS6UlQjgWna9Pb10GjUEGFAo94Gma4y\nr0ikUhhSYqY6nbRNxsysuW9asiqYtIpSVqSkpkaq68R+THVxiUSVcUt9IKCvr4dP/e9PcOjAflxD\nY8eOl9JcXqbY10u+N0umYNJz8XqylkuvlcMWGpVGhRNz5/DaIZfu3o2KIpT2UziLUuow8K+Yx5RS\nFX4KiTwAPU7IWxZuqoGuEzsGsYJOq0Z9pcLRs7PccsOLmDtd47vfe4A3/9rPU25X2Ta6kct3X8Yf\n/Okf8Y63/DrX7biNb339TxgfH2Npbq7r3qv2ujtfxg8ff5TJySk+/YUvMDZUwu6xEKHBn//NX7O4\nWOWFt93EuelpLrnsIjZv28wP7/8hUsUMuxnmDh+mV9c58PDDvODtv8p9P/gJQlPcdNttTM0tc/zh\nJxg2ejk3PYdfrnE4aZFKSae+SCljMIrBej+lGisGiv00luaRSJZFTD5jY7o2+XyJHZfeytDWyznw\n4XeTcqorr00eM9JJRQcVJ9hS48Evf51hvUTZKLHtze+iqeuYqcZAR7BUsehrVLDWjTK2qcgT9z/U\nBY+mEegG+UIR18mRUTqJY0AQPqUBqdKESAn6Bsc4d2Ivp4+e5R2/8Q6+8MnP0vZ8/DQmjCAMQ7LZ\nzCpETKDHAbZhMFbIkiUmigIaSUKYxrTLZSK/QxxHSM3owuq1LmpZ1w02btjI3Mw0Qdjmt//wf/D3\nBx986r7VFhdJJN
i2i+VmECgsw2JgcBhDpERJgtR0pOl0dTBz3THL7RduJPYC0LvATmGaqEihNA2j\nb4ChrTu588+2sTQzz73f+Dr6coNixmbsZS9l4onH6Zza/+xtusaetw6+pgSpF1Jys/TYLpYUDPQV\nsCyT/fsOoQUWJ584Q3limdnpFX780KPUF+e4994f8JkvfJ6B/mG+8rWvUByUXH/ZHl5+241YeXPN\nGhfs3I5hxGSyOXRp8sIbbmJ0aJA3vfEX+eVX/wrr1m3hm9/8FrsuvJBXv+pVHD1xHK8Rc+0VV7BJ\nOmTbHmajQX+c8MAXv4jRadOsVWjPlVk5O8kVWi8zy4uMlJts8FOuFC57EpPLsr1c1BIk+LiWThw0\n6R8fouVKorxNX2LQq1uIlkerWiaqLJL1a1hpGz3pJwkdNr75Txh79x/QFopQ1xEipQdBvtKgkB3h\n2Lkmlpal4wva+SJuTvC9h76Hn9NoZxSRBUqTFIs9ZE0XFQSEXpNWs0alvEB9fo6oWiHtdPBadby5\nBaZPzyB8g20jm3jVK1/Jhs3rkELhWiamaVIsFmk0GoSeRxrHSKnRkwkpGAmJCkmkQogUPwrohG0i\nFRKlIUiF7VgYElxbozTQz85LrwSp2DA2gp1dC1OyTJ2saaACn8bSEgszM8ycnSAIIlqxQrMzaKbN\neRiBkHJVtk+j3el0CSzcDEopHF0jigKG1m8E20IfHmTo8l286Y9/n4tf+GImO4oTtZhfef8H6fSs\nndh8tj1vzuInEX4cEngeyvcoullyGZeBwR5edMuLWZhe4dDeg8xNTuBmLSYmppk4PoVGkQP7T9Ko\npZybavChT36Fr375e/zJ73+QrDm4Zo177/8On/rrTzMw0MsNN1/NX/zlR2k2Orzwymu46brLuOnm\ni7j+hsvRbY0//cD7yOYLHD12nMP79rJrbBtxJSAjsxREnkFNQNRhqKeXxuICnaU57Jl5qv4SI1HK\nVmExGKSsLM8zpyUI2yF1M/TaeYTnEdfrpPUWqRcSq5h2klDTUxZtjenH7+fcR98Hnkfd1FB0yOQH\niJbnSRITXxd4MqYqY7QdL6BWmSQuH+fYvh9R9yq0vQTKS4SzkxQOnWZ07ylu6h/C1LtsnSKJiBtt\n3FSgd3yy6OR0Az1OcXQNFYYMO1BAsGfLNpKZBUIR8xcf+RClYg7XtLBtmziOGRwcxLUdLNNCSUW/\nBe/75Ef42He+yY2vfQVS10g6HloUM1Lqo2BZmI5JJ+jgWia2qbO0soKVL6ALwdTiHKMX7Vhz39z+\nAlY+g2bqSAU5O8Nw/wC5fB47k+meRFZ5ms7DZ4QQxJ7CTGyqc3WqizXKM7PMHDvB2UNHqZybpb1Y\nRqkUIwYpdF7zO++if+d2cpGGTAPumV6rSfpse96cxYsUUbIKwFMaJF0N9nbN59tfvZtquUqr2SSb\ntQhXlhEtnatuugrdzTB/tsbxRydZmWiz95F9ZM0sjnCZPHJyzRrj20d41S++lqxbolNZ4jN/+0WG\nB8f426/+Hf/Ph97FHS+6g7CVMnN6iRdf/RrO7j9OwTaIgoDePbvQhgeoN5vMZzSUnadV7MceGqU1\nU+YSu0CqR9x2y0vo37aDwpaNjI6N0ghicgWTWnuZjBegqk1MPyGIEjzNZl/a4UkRczgL57wAo2eY\ndSOjhGGDpqbQog4ZaXP0Q7/MzBc+jKUFhCrB0wLGb7qDRidCCBfnouvIbr4UwywQdpZ58NtfxBAB\nbm8GU0nWbxnD0CySNCUbK6LIR2gCU1NYaYy0JB2vThRGGMKm7nWwLZvLt+wkHOlBFxrSMHjT299M\nj2lStHO4lo2WRLiOSRh5FDOCzet6cHNtQhnyure8i6Ghke4TX9NQUpAgicMEaboYAoYNhZA2Q0ND\nWDKlaGb5yPv/eu3mUDpK17ELOfIjJXLDJWTeIBEp+lOIge64M3SbnlEUEQU+sUyI4wC/1SJstEm9\nOoOjg7h9veT7ekFKQkMnVRoq7mDmu2ya+/ad5JqrL3nOPfu8QfSjKCERgNBIlcLMZLHCDlITVCKP\nUqmXTqVBp+EzMjLI4SeOkkifhbklXNOESCFViIp0Yj1h5zUX0ghrwNO1clO6mDrYtmDLRdv4l4e+\nw57tO9i16RLe/+d/xd/8709BrLj7q9+gE/j0aBahUFjC5sCX72anozOp+1hxjFMqMbp9C3sffoQc\nOplY0gma3PONb3PL+i3kG02E52G7Jj3WAKnusiw1WjJiUktAxQSDfURRwAW9w7g5l+W8z2SrgzM4\nQNWUyEh0u+QCRGqQCp2O7OBGJo7poukpBz//YZwoIJUpqbCQfpO60FDxCjnXoNHuYHRShlMXt+Uj\nkggMmzCKiZB0fA9L17HNLDZZtFwfahTaHY/UD5l4dC9LUrHrbT+PiBNuvfWF3PfVb1Crt0kSExWD\nlSiKjospEmYXW3z+A58kKY3xxv/2Lm597e2ceP8H0KWgp5hhZWmWWCt2pScMhSUVhVye8tw8pq6j\nlEbY6kDPM3fHailZii4tLOkqlax8qmv3bJCmaRiYha5aWam3B9Iud9hjP7qP9RfuQBkuSoguBS3x\n6khHwHvf+3s8evA0E7NzvO51r+b97/2Df3fPPm/O4ocxSpNg24RFi2mvjl3MYqQKo5PQqLdplBto\nls389ApaYNJebLF7yxamjTLLSxXcnE1jpUZiCBYWztGzrn/NGgcePo6INMrLZb5z932cOj3N8R0X\ncMeNb2D69CKH9z+JihLiToLUEqShUdQthryErbUWudFhyksp24Ri6shhint2krEtem2bTrNFFEVk\nhcF00GRIkzQ1nRmvw6Kdcq69QmhlMftcevQM5vpRxrJZTj75JO1mg9RrYbkOuVoVz9SoCEGoEmSa\nIAwTVIxQGrFMqUgDN+4wd99dbA4UJikrlk1h6wWcO/QD8oaOp3dZF1tJTITG/L7HufHqF/DE3oeJ\nwxjdsSmODNNKYjLFPhAC3Qcnk6PRaZI6NqVMjv5tm9j4gsuIkxQRxqSGzu1veCPfuOsHDOuw/4nH\nGRjqY2FxhrSt+OHZGR4/Uaant48HfryfX3rXr6M0ixDBQq2BryRposg5FhIflcSMrx8jn82hCUkU\nJeQya1HHT7FgAl2E1ersDP+6WqWecRSDVc6AOEGlCbOzE1x86SVI0yJa5RFTKGQiOPvEDzh6911c\n+9Jbue4FV7G8cQTNcJ5zzz5vzhJEMSQglUa75mPaGrWFJVSaEklFZamOFgksS8MLEnRhUZ5cJG/o\nlGcWWLdhnESGWHoeSxrUfY+w7sMzCF6OHjjN8PAgJ49MUF3q8PZfeycf++hf8t2vfx9ddDl3K75H\naiqcrIVScM3u7ewu6/Q/PEEyN8t1Oy6kfPwsw0IQ1eo0J87Qq5s005iCdCYvXQAAIABJREFUqVNK\nDWYadWqGQyhSqq7J1ou2Mp4oCtkStShCb1VYXJinNDpGvLJCJxSEYYvMyAD
9SrHs+7QckyQVFISG\nSrpqXJqU7Hjx20lHtrDyjU+TX1ygriXEmkHu2psxxrfQPHMQkdRxM0Vk1CJKUxJdpxkpvvvj77Nu\n4zh2Q9BG0IkiOklCbX6hS5qhBJWgSbZQ4Gx5mVYS4ebzlEOPcMiFhSoXXn85nZklegqDoMfs3nkJ\njzzyAL19eZYrdVrSYTmWLC3WMBZWePhXfx0NiVIxU8t1DF1nZCCLrnWTfykTQr+F4+QQcYRlmqj0\n2awq4inxJKlACflvuMnqbz4zwqiUVGgIKUhTxeL8FCMb16GkjkwFqVBdfoJUMXHPdxmKPA588TNs\nW5pn6MobID/2nHv2+ZuU1DR0TRKnIZZlUe40aDU6ZDJ5WtUWtnDQLEXYqJBqgPAYKPQRegpdy7I0\nVycIWxSzWcxCTOR50MytcRbasHBunmIxS2W+zEc//Ndcc801/OIv/Cxve+ebaU+2cBILZ8DhnT/7\nRr7+5a9S7Mlx/N69uEqnlOjMnZ3AzGQY2riO+/ceYKuWRU86SKXIJylukjBcKNB3wUayroNZqxJr\nNmE+i392hnLWxl1YoqEJwlQRJwpdSIadIqnSqUYey4HH0LatTD02h9BMiBUtXaOUpjT9KsValc7S\nFAJFIhSJlEw8+kOSH99LRvpIy0EPOySajmEIojRFtww0PUt1eh7DyOAHsCITpB+QS3XymoalpWit\nKr1mhtkwYb1p0ittTk1O8+u/+z8ZNDPwdxCbOrVII418oiRA1yTVSgNpZZCxIk5DfJkQoQiUDih0\nTe/SqxoGZxfL7N4wjCZ0EBFefYVH9h/qTlQmnS6H9TMsEQKJWOX5+7fiyb9hKkUQEmMj4pSl2UnG\n1w0TYaCnEkOkBEgiYmi1ibwG1SQksRwOPfgDNCNl6KaXPOcSz19kCWNCrcvIHlVissLGsTWCICKv\npdhC0JftQevtoeN5OKZJb28Pi2GTyAswHJecbqE3OtSUzbqdG4i0tVQ2ofAxQ53IDxkb6eG3fuMd\nfOTvPsY/3a2xY9cu3A0Fzpw7xsWXbeRc8wSv/6WX8vUPf4lrsxfRqC3gWBaJ0Am8lHHdISXC0Rzi\nUMfI5DE6HmkS4tWr+JUSWhQimz4TDz2OrUHSbKER40qDTJySlzY1w6XWbtGfsQlSRS0MKS+22HjJ\nVZw0B2hJhRG10eKYWOk0H7yLqtfC1h06RhYRd4ijFqby6Uid0uAGnNhDiQTNNjCFTiAEKYJ23acT\n1THjBE1X1OoNRgaGCGpNDNMhCGM6WkxdpLTshLjUS8bOc5HSCQydBZFABGasMERAmAS4BniJQGaK\nJGgIATL0MbSInmKR4sAouq6zXFkm8HyCwGe0t49ESfrHN9GZmsDVbfKWwekwQotD2s+aUBSksMoS\ngwK16jwK9ZwRJkUiBSiZsDh7jj1XX087TNAsgSJBYmClCZ//099DD1u4loll2WhKsvfBB1kf/Bet\nhqWro6UJCkMpRmyHYdtm3HXYnutlfaGIkUYkkU/B1LHimPLsDAvLVUg14o6PI1xi1+TSF+0hIMDJ\nldascd0rdrP+whH8TkqlHfP4kUO85w/fxxtf8RpIOrzp11/HHddeyWsuuJnq3gonHznGC4sXMOBB\nftdW2LaJ2nA/0yMFzmkJI5qGaLVYLhV49Sc/wsr/x9x7x1t61fX+77Weuvs+vU7N1MykTBqkh0Ag\nlIDKRW4EAbGB3guKohIvgrQrCILG9lPAAiIBBEFMCCUkkARMbzOZfqacXvbZ/Wmr3D/2mUkmavT+\nuPeVu+Y1r9cpe+9nn/2s71rr2z7vXA4pLWXpEi8sEc0t4i/VsY0m68fHKPWVcIQg0xZH6V7otFIk\nE5KVqM1St0s3brNuvI+TjzzKWVt2cvHLfxzjeAiRoqxFRy2ioMA5b/o1dH+IcgSxESghqZ69k30V\ny75CSrvdwJycY0S6vO7FL+WtL3wZ73/RS1gnXXxpyBuFjBQplmbUpY0lDhwiAQ2rOKEld55c4LvH\npth3/BCbU7BZTGYydJzSyVJSq+mmCq0UWmU4ArJMkakEg6ZYyuP5vZ2lsVqnVquRpim1zipTi8vc\n+chBkuowi5miNFjBDXxcx2dm9swSfWdNnUaxpless54RmF4yFOSaAy+fUpqxIHGQxtJaXqS/UkTJ\nAoWwgCMkxoLBgIH+iuSO7z/AYjchQxDkCxQrw0z/8MFnnbPPHXJCOgjpIjEoa3rlBq7BCQQ61cRG\noQJJPYnJdVNKfp6ucikEHjpvSZI2jSRi5KwRltuLSC9k5njzjNqwc87ZwyN3/z0ELp2u4Jorb2Df\nsUNo1eD8C3bx7Xv/iWvTIre//WZYMSzEyxQGi+y58qXUZo5RcxO++/ij7N59Dnc9+kPOzzSulGyo\nlpm5405GopSq9dBKYByPOM2oaImXpfRXysx0O1TzJUwRvFaXfbMniXIFZBAiHBDCEIYh+W7Gqm7g\n50Pu/trfMSjzOCag47sYncfdtIW9xw7RbS2ilMZ1fToC5lfn6A6XyKzDDeu3c2F1mEY1h390nkIr\nxkZNfNfimp4fFDsOq4FDPXCQnTaJLwjyOSp+wOUvfAlP/PA+gr4iZxXKLNaXmZk9gjCGxhr6XFiL\nwCKEwjG6d89MgnTBcz2EkNRXG9Rqy7TX8B9aQKMVY6yk6Qo6BzpIL8eG44dIEk1ea4JCcMbcuOMv\nPktQrdA/McH49q2UxkdoG4srNL6QaCuwQoJ1cOjxuTMycMAXGXMzx5hYvw5sSmYEjjgVJnBIu3VK\noUdp/RaiqMt8llBdN4RxHcqVH7FE///W0FmPhBurnopJ6jqoQKLQrHbr5IMiWaboZoaB6gCqlZDL\n5REyYTVqEYY+aEWyuEpdhOhynvrCyhnG4tsB4q4l9C2EGZ/5ymfYvnMzV1x+Cb/3kffzvne+jeg7\ni7zhS//I1INTfPrd7+Dx1gmueewAc8kCHTdkd3WYhdos242LthofSWF6geN/ewsDOKSOBmvpNJvY\nvhKVQo7+RHF4eoZmluF0mjjVKr6UVCfH2LBuKycf/AFenGCa9Z6f00jI8j6222UybtD0JTmR4OoY\nnWrMwYOsnJjF+mWkrCO0pV0UtAsZAwNlZLXMw4cXWKd9RKdLPrXMi4h7Sm2ezClsBsoXWN+jbTLO\nOu8cWicWEJ5DVwpqqzXU44cZK1eYmjqJCFr0DVfQLmRKII3BpRfqt9airMIkEfmc3wthW4jjCM/1\nmZqeQ6UxcIpJacEaHLeHiVipdwlCzZ3f/S4ToYuHoLG4eMbcWGw0UMsrzB05xtTddzE4XuW8qy7B\nG51AFnIo6SJw0ULjZ5Z6rQmui+8FNNorrMzNsWHDWXTnFihPjp1WyTRC0Fw8wmo9oZkbYKQUYa1l\naqnNhkmX1D2zkuCZ4zkzltQYdKaQUqISRUSPH+IInxSXZqOFVj1HcaFRJ1RQdHNYmeI7Bmk1Vmu8\ntqLWWKa6fozR4IxgPV/8/Ofp7y
uwZfM6atEq0zNT7HvyEe74zjfprwzwrVvv4dqNV/LQhz/KI/tn\nuCyocDz1qCyvsLtc5ptxjUhkXLAkCLspriPpEw7VxKCES2YVwvacU5MJjBVEnqRsXJ5cWWbDubuI\nG6tkhRxOZIn9PMuzi8zOzlK2liHroNOIYphnRRsGq8PMLC5ROedC4tknyc0v0ykl5CMN5Tz5XVtY\nuftOROhi1/WhA0ucdGmhuG84ZHb+OPmliGu3baf/2j2s2znO8rv/mHCxjVfOUXI8FpstmuUuzW4b\nN9V0pUQ5knrzOENbtrBx54s58viTnOP79AuP4yTkrYOWTxeD8LAmpdNdxXEsSAj8Hkio1Wrhil5n\npOM4vVDu2nN7dWGCJDMYJ0dDgl/KEz6jHaoRZ2CglWg8LYlbK6iZr1EKFf3lEl5fEeM5pDZgZnGV\n5WZKnDpEVrPxvPVMbN3Jn/7hX7J99Cyuf8NrcIseSrh4OmNlapZH9h8hFv1InRGU+vjirffzhpec\nT35s+Fnn7HNmLBaBWGtBdVyfjmMgcEmSDGEcfOHQTWMQhvmszuS6MZbShJFqAasNUcfgCo+mowhs\nheZUE+FmsP2pa4S6BVKzMDWFDV02nLuOiYkx9u7dx1mbt7Nlz4U8fHwfR77zZSauuoLVWw+zZXiE\nMAtYroQMzQsqxsFPu7iuZMx65LEYF4yVazKkEqkM1khc6dO3cT2DrTZ1Ui649HLu/eZXWViqY5eW\nWDYZo5NbKQoHqWIC6VLK5Wl2YzLfZdlJGbnkeQyPbOXJx+6HQBJmeRIvIevWmHvoYbzNfbSrAbFK\ncHDotBt0jOZA03C8KPntP/kwV1x8JRkZD37sTwj6BrGLMTLqImSA7xiWZuYolgtk7ZjBUoVIWHS9\nzvTjjxGNrSC15PD0DC+orOOLiwfIpDyd+wCwpgdedRwLxjnd+/7QQw/juh5Wq9Nq+5ZechB6xqON\nAgFbdu6mWMmzabDK9287EyDUijPA64nkoWnEsBDnqAqDv9DBpYUnBWlmaHsu850E1y3Qaq/ysjf9\nODLoo70siEdzGBxYgypZJPNTR+lmMJqPOVlv06wJJibHyLoJi/UzyXHPHM+Zg697hDq0BRO4ayJt\nKZHOwDVUh/NIX4CQVEarrHa7zJ9ssu/gNOdf/XxqSUSzk+OaH7uOrspI4gSpz9xGR0aHGCpUyCcO\nZVlmrDDJ2VvW49o85563g7/6u0/xrQceon7+Fo7deT+dfMAFZ11EJzBUtm1mfb6Kt9qhJR1Kk0NU\nogyhBdz4KsY/cRP1goOOUjrCIJXgRD3i6IljTE9NM3dyhn/Z+xgrUw3KnYSzggrrRY5OZ5U4W6Ho\nBxSMoOvBikyxLuydnkfXljn49T/DwWATQ4ahKwWrYYyqRHSSFvmszWi+gh9DGGkGu4INpX7ybcXf\n/sr7ed8bfgHVlrz41a9kOV7BqRQYcEKkSRnxXYTjkJ+YoOtp5manYHURP04Y8ENah07iakEclMlU\nm0qS0HbAOrpXZmLAOCBdMMKgychUwlK9SWwNxiYYm2FRaJNiTAZGYXUGVuNkGsdYoqjD0QPHeXLf\nEWRyJvg0SxUqTcniLknapWUSFulyzCQcTSQHEpcnupL92uNEIpiPEpbSiDaafF+OmZkVEp1BsQA5\niZAKFw9lY44cPkI3KDMqLciAfSdOsHF0iI6T8blb73nWOfvchY7XZHIybckigywaZKDwpKDbirGZ\nQBhJGlmSpYQoiSjmC2zavZGHDj1KcaKAWo04fHQf+YqkNDDISnzmytCWmkS3UZ4h9DVnX7KdxdpB\n/vLmm7FuzB1f/ife+8tvY/XaBn/5ul9lJ1Xc//Zyir9yAr73OLmsyWh/Ga8VUzzRYLYKJk1pff4r\nPPT5r2BJiAs+sUpJvS5quECrtcJOPEqdLueSoxVUaGWaw90lZuMW4/0DjOoccSelJV1qcUTiBFSr\nQ0yWB2i3O8TSsuncs0niBk8eP0JlZJBSILHCILXk5PGTdNMaeD6u7xN4AWlbg9R0XM2heo0o7XBk\nvsYXvnY7N732p0jjRZRboN1ukbVjVh7ax87BIdoFjyjqkPg5VrSm3DeAU62w8aIrmHrgLnZWynQO\nH6IbuCSuJZ+eyqSfGcY9tZOcEql4uvSRFhIrLEmc4LseY+sm6eqMuZV5TMfFqjNDtqlSGK1Q1pAp\njRA9zosSEmEtSmukdHA0+EGAUgYjM0rFkDTV7HviSTKjKJVKuK6DRaOEJhAeMwtLJFGTbmrIlVzS\n1JJzNJ1E02j9P6qiH2WaRNserNNKdCZQyqA1ZLFleaG9Bt8UuNbHlwKnpOg6XTbt2srzX3gVbtky\ntzTPtrMn6KQdkvDMeP2ei8+jOj5GcXyY+ZUVbvnyLfzw3gdR2vCLv/BW2krxwJEnmGvWeMWb3sA1\nuRH0Z7+BaHZppTEraZOuiZEoukrgd0JKbYfAZqy3Gmf9MJf+0Xt59fveR9sIOrOzRHOrbJA5trkF\nolqNg3qZh1dnqDsZ5TBHSTnEQ30cyGmOBUB/H+Pr1pOzltbxY2zuH6NfB5x8+Am6c3Nsf8FFbL1g\nF8L1SBPLkROLuG4f5XI/v/Oe96CNJtMpcVJneN0En/mnf+R1b/l5vGKFy656EUenD/NjN/4kq2mM\nu9RiY+pyLiFBN+LA8iwrFZ95T+IV8mzauIHQk7QWZmg9+SDZzDF2NDrctPv5BJ3e0apXt2ZO075O\nGYl5Sq/1tHbXKYXIMMwzPj7Jhg2buf7lN1AolxGu4JILzqfRWqKZJWfcN2U0SZaSad1TmrSQZopE\nKbpGE1tDYnsBh8xonCDAYNiyfQNhscTU8VmEIwlyYS9PY8FYg6NgqVGjMXuE0BHU2ymlQhkT1Yki\nwdj4v6mt8tTf9X9s9v9vDiUFKZZYpwin5+D5bgWTGKT1kCYg6aYYYtJOnTDvsOvCTUyu7+fB+x/l\n/nsfotlKaTQSdu3ew/rNW9gwsfuMa8wcrXFyZpnZ+VWK/gAXXXI507PLvPlNv8T2zRewtHeJm9/5\nEY79/i2MfuFBQg+C+x4h9TI6vkRkDpVY4oiAqL+fjb/7yzw2HOAkBrRDuBiRJYaZRp0+fMaMpOXB\nt5sn8bZs4JGjh5hpNOmvlOlXHnWj+IGtsf6srQyNTbJhcIKCBkFMe3GWUU8iCz6JNVgynCTl4gvO\n41vf+Q65QoFas8W5l15Exyo2bdvI/Q/9gMHhfv7ra1/JV/7xi/z15z7PF77yZc5//vMplXwOHnyC\nydENtJcahOUiOT9ASreXhQ8cWlaRGxxg13nnE6SK9sIcZaHwug2mF2YItpxFo9BHljhcMrgRq01P\nlFvr0wJ6WvdYnI7Tk1WtVqsUSkXWb9zA5OQkIyMjDA4NsVJbRUjJvr37ufLiy/gv199Aslpn+7b1\nDE+Mn3Hfoiwl0YokTUmy3jVTZUiVQSUKk4JVEEcpwvHItADtUBoqI0RAs91BCqiWKz3ZKOE
gjWZx\nZo5UabadtR4f2H/kEMViERu1OXTsJP3VKs82nrudJY7odGOUFkRRSn21iU0U1WK1J8SGwUFTdHyq\npUGGCyN0DnUwtQy93KU2vYRRktff+BZeef1refXrX8HlP7brjGtcfcWLueDCS4jThLDo8flPfpal\nx5vMn2xwz99+k/21mEtbfewwIZ2laYqZpJtEtFzDXLYKToHEdJGu5Xkf/l1qF7yAK17yerx8gHAU\npUwx9VvvYf+f/jEykBRMkX7yjIYVHnroAWzBxw1CFkyHI6Fm8/B6tmzcQt/GdbS7KbXWKvHiIo2V\nFXKuw2C+yJNPPIybyzEhXdywytjkANu2bGKpdpjfuOnXWV2dZcuWceIkZseObbSyiBe85qcZ3X4x\nyyttbnz9z1AI83z243/MH7z5Z1n87J8RimMIJyMKBSdtRL3gMD44wK7qBMvTcyxNH6NuYsJmRuRI\n+vsHGR+YYPOuS2j5JWRflZe99DryUpLP5ynketJDvu8TBAGu4+N7ObAOnXZElmXMz8+zXFuh1qgz\nW1tmcuMGpOviu4apvT/gY7/767Q6TZCQD8/0NbMsIVURqe1xWLIsQ0iLkAbpGKBLSoQJNZloYWiS\nZZpqaQg/CPGdmEpYojiSRxiLtD3tzNnpY/z0z7yesmuxro+WgvFyhb3+GNe88e1c8/KXPOucfc58\nlsAVtOMuSdKlWCxSzudxrGB1eRVHOqgsYWx4iFB4EHqMbB+mbzKHG0iumVjHA/c9StbOuP0fvs7f\nf+qTnHPdOTTsmT7LP3/7nzj82EFU3dBwuvz8G/4bf/3uP0ELTTvI87wo48a0n/yJBiuZIH3dq2h/\n8jOITownQpYmRggXU8IYbr3pdyhccwnHb72dfiKkcrA66enrCkNZSxzPIdCKjusgyyEn6vOMX7gb\nsdxgXamMjA3t1TqJtmStFi0B436AMuCUCsyomMmREezJCE8olCc5emCKZjvm6mtezAc/+BHiNONX\nf/Ud3HzzJxjsn0TFCU5i+O6tX+fR++7CteBFCfXDxzl/dITWunFe+ppf4/Zv30infoT+/n4GvRz1\nVpNOlJG5BlldT5D2WI2mFJLgMj97lJOdFexikwMnTzJ0vMCOfB+PJ43eSs1ToeQeOxLAEgQ5LOq0\nQHe1WmW50WR2Zoq+Sh6TGc6a3EZ7dQdKKZK2wnfPFLdbWJxb496D47iwpmLpyV4vi8XDNZaMBGUT\nUm3xnQraxCjTCzwcPryPd/3y2/F9SRBIcrkQ4wiiWhPTddl/+BCdUoGlIzOM7dzMn3/lK0j97Lji\n58xYqjkfV2mM6CEKkiglH+Yo5Au0OzGh55BEXRAeKhXMzVpaeKSdhKyT0Vmpk3YUJ9IE42rabc30\n0vIZSckLLjmfdFFxbGEK33jc/KFPMJyvoNpdBoXDW8tbaaxo8o6lVvU459du5ORff4GubBH19bH9\nda9i6ff/DOPB+Nwy6iu3s9mmKO3R9gVuJkgcQVMocp7H+p94Keu27+bAX/0NL9t8NkeXarSKQxjj\noVaaLHaa6IVl3M2b2LRlPc2pEwgreqzFtEPHTclXB8lJByfwWQodvvGtO5ibb/DVf74NbQsYk/HR\nj34MTzr8j5vejV/I8etvfwdbB0Z51SuuZ6AYcHzqMJdfdw3brruKxEo0HsNBgcd9D6li2p0Yx1oq\nLtSNws3lKeZL6OVVarUmIk0IdUbJC6lOjFE/NsVA7HBZeYS98SpGGVgT57a2JxR+iggWRRGs8VaE\nEDTqdRzhoBJN0s1YN1jGpDFBmMOx4BgfnZ55wJEScCSOFPiei8SSKk2juUJ5aJCJdWO48Tye3wsz\nDE2uJ1ccZnr+BJnWnHf+hSzuP04gBUJrsq4maiVoeq3HsQ4InBxK5qh6OU7sP8H60WGuvuwKvvfN\n7/+7c/Y5M5bR8VEWF1do1Ju0ux1AUl9toa0lyVIkkriboqQCz0UvtGksK4ZHB6lNr2CVxLMhgesR\naYdHv7efsYmhM4wlWY1oLtVxrUe0mjBUquAYh2qxxE+744zMS8Zv/g32v+tDFFcjjv7FLQRJm3rV\noVtbRX/wcyxXM8rtDNf3yOkUKwWuHxBkMZk2tPwcrnTwY5jYuptudYiT7Tby8QPM1uuMTExy6InD\nDAuXQtGlWigRGU3OCel6IVGWkbMWaV0qmYNWljj0yceGG17zEyy6ijjJWO0sMDvfxjE+y8tLzE7P\nYLWmnnbItTs0nAaLUwe5+Kd+kvNe89JeWbu0FLIAogTKPle+4Fr2PfQI87NzuFJQsS4T5SL1NCXM\nhczPTRN6ISXpE1iN22rRNzROI8hR1oINxQJyPkESoE/lUdb0i+XpUvmeo3+qz8RaiysEngzQicvu\nzRtpt9oY4eJohevkOdW5cmpIaRBSgDBs3b6d177hjWzdsQO1dr12c4aP/e6bUGoZLTTTsycZHV/H\nhZe9GVfkeNELr8aVcMc3vkmUZnS6Ea5xyUmXVr2GirskLnTTBl5lhDDWHJ87xue+cman7TPHc2Ys\njx08hFY9fJ1wcr1QYWawBlzjoo3BGI9YGoTVCJkhpWXu2BxYB6sEwggK/WX6fB+VKfRi3FMqWxv3\nfvFOSn7IT731TfzTrbexdWSCo8cOckGhymWPRHhv+ynk4ACTjZTjWYz5w09Sz7qIWkZZWNr+AgXl\n4Dt5tLW0koysUiSuVmnXlinHGdoqfOvS77g88rE/YlmklKKEms7wQg9/eYmBMKRjFHGn26sGXGlT\n3lBmpV6nFPhMOjmmrWFed4miJoPVArlVn8/89S2cNF0cX+DmqsSpIBQpwlqSdqe3+ruKchaRNxH9\n4UHU8tfgxGZ0OIb0K8jiODossueGlzP1re8hU0UhDFlONJ5Xphr47Dt4mMLEOGO5kJwCYyyiWiJJ\nExabK6TFMoejFBlHPbw2EmOfYktaq3ul9EJgMWBBiKeJdmtBmiYU/ZiLd2/hS3fcjzQeRuTIjMDK\nM0v0r/+xVzI6MsHk5CTDYxWK/f091LgUODbl4x9+C4VCmyRx0ZnARIbV2Tp7776VotJMjkVcdbHL\nS1/+szjOEFKO0mon3HPPPdxz5z2sLLZpHTjA+ds3slhbYqivhFcaJyy53P/Q4//unH3u2orTXt2Q\nFOa0mDT0QnwI8PxT4BwIQhelEpQ1lHN95HN5sm6MShNe+9pXcsstt/Se+4wVavfWzQRBji/9zV8R\nlIrcc//3Gd28jk0nwfFyPHTzZyg5X2Wb7VWjLsVdhNdTFRG+pasgC3ykE5OpGOnnyF3yPLb94hup\nztX55k2/jadWcMkQUtFpRBRyITbr0PCLREnCwNI04+s38fjhg/hugMg0M1Gd7PAqF5fyCGU5qros\nGEWu3IdX9Fi3ojgqE7au201gu0TJEtJJsHmwjsWTDknOQRddylHMjmqVC0fHmf/2NO3bHmLQHKU5\nPsrKa7aw5ZUbsVay8+zzuOcPPwmZYqzUj1cyTL
UbVOdrTEiHUqzJl0oo3SVKLdp12b5zK/mRcfyx\nMaqjE9x99w8RM4ewmJ5S5dPGUzvJUwCkUzuPKy2BELzoskuwWYrSDgoXMPiuQD+j3OXy615Cf38/\nR44cpB13KTu9+q5HH36AO7/3SVbaByh4pZ4MrJXYDLTt0pjfy5F9GxkaPJv+8iye+g7RiiX084iO\n5pIdBS49/zIKA1vxc1uI4i7fvvUOvnPXXdQaxxDe/6OYPHBwHBD2lKq6exqUKZ1eaUQYhPiei+dJ\nXK+K67pgBWkSk2RNBgcrPLH3Qfacv4skScgFHrfywOkrPLrvMJs2b+YXf/O3+KMP/wHXvOJ6Nt57\nnJe8/x10fu4PuCAGmzRZrKbIWNLUGVplBI6Dl3SYHtnGtTf9Nt/785txjuzFM4aZB+7h8UNPUPUL\nxEkN5UCgJG2raRUkFk0SlLn0F9+OJwX7P3szm/Mt4m4Hb6wfF0H6xt2ZAAAgAElEQVTJSuK5JpGU\nTKs2WV+JcqVIMt9mYr5ORMY5uSHmo4Rz9lzGI/d+mX53hZyb0Y180Bbf8TBzXV6eDXLVeVdhak3c\n5YhGyWXh+rMZefOPkcsNoIN+rE0xrQZurPBcH5PzSOYW6ZqUySBPn0qYq9fYuWsjnUbG7k1nIXM5\nrCOwZBBnJJnlqle8gr//1leJPI2D869ARqfGqZzLKSSEFIbRaolzNo4yVCrRaUeYnIfvS9rdFjvP\n2XnG8z/w8XfhuYLzzz+Xn3zl6zEmQIiUB354Jwce20eUuHTDmLGKj2NdslQRtw2OkCxFS7jBJRQG\nFY36QzhhzGpnP3kvJi8FjqyRtQ6TdYokieTaS/u4+vKfoFDZgXX6+Myn/v3W4ufMWBzPAStwsCB6\nTTvCSAqFAlJY/CDA8zyKvouUhiDnUi6HSBFibER//1kY3aVa7qc4MkTSiamvNuBpyeCtWzaRasOD\nX7+DDVsm2PDkIluPNTj8no8TqDa6JNF0KMYe87ku1i0jVps4NiMOcgyfs4OT24bZdeMbefxDv0GC\nIh9BsbOKEiuskJIzLr7SaAKGhGBeuLhWsFBbopXENFWO0uRGRkfyzB09gbPUZMuWbdw5s8SCTXl+\n/yhl32MqSdiTK/BIq8bg5k04kcs21+HLt32R87ecRWk6wA659B+bQYeSRGqcLGAGzZ9/5y5+8bfe\nRuudo5S3X4zn+ax+7S72D01x6SV7aEydZGBslK4DsuwzfWyacuowMTbGzPQsG8shm0crDA0V2TDR\nhysc8CRdo7FW4yYZSaeByPlUSzlanQZCuz3FyLWseu/41WvbFb3YP8L2DMf1BeiU/kIBWxhkYHiA\nmbl5yqOjlAc2oIuDZ8wNwQLGuOzf9zh/cPIPueSiS7jhRS/neRecx/fu/BytjqSQMziVHlQpXymi\noxTPhcUj32Whdjl37G3zyMN3c90LN3No732M91u2bBynrz+lWsxTrJ4H0X4c5omaB6H9IDr+0eRb\n/6+NfN7Dkw5SSMKwp7BezOUJPIF0oL+vn77+fvpKpd4RJ2rQ7bYxWmJxsCR4gUchyLFOFvEnJ3i4\nu/8MYxkZHsZPJXM25oLWIEM/eIIBFVKdj1gsCKrv+xWiX/koC6JXZLj+x1/D97/+92xuL2ETB+78\nF2oo9j/wGL7VCBwSbRBIOlZjfB+hJK5jaXoh0GEijWkJyeFbPk8cBDhxk9sOPtxj1s/W2FEcZO/s\ncTZfdB6LM/NIQtIkRpuEwOvDKxTQicL25Tl26EnOBiqNDoKAzTvP5ZD0aC8v0RzJs+vJFQYu2MF1\nr3016lUvoCBdstU2kQuZH3HJ2XuYOX6UkbN3ELWgf/04J5amiYWlGDgkSQe/P6Aw3s+60RHCIFi7\nJw6ZtTiOoJN0kW4XmyUIbbj+uuv55Je/uMaMF6dLX6QQ/0p5pRdSdsiHAeV8gEATWUG92SE/OMiu\nyy7lisuvotbq8rF733v6vnXaiz1WSuiR6JR//vY+fv+D70Z0oTwI0svTaM6T6Tx+KPEdFxE6GCKG\n/Tzfv/WLXP2qn0XXKtx7xzfwc03yOZ9GNEiYrmOs8mKsG5BxGKlj8mHaa8V+9lPYf85YhBAO8AAw\nba29QQjRD9wCbGBN69haW1977LuANwMaeJu19pv/1muevXMjOS9ACkFfuY/RkRFcJLmcSzfukGYZ\nSRyT6RYmM0ghyecLtJodpONRLvbRNzxAmYCRwiD75uYJChV4mtyxX6lw4caz2fD8PTzxxt9ksqVI\nSx42FXTcmIFXXcPKOz/BrJPhbbsY77++nhds38Kx97yH2GmTz5o4d9/JiMqIHAdlLakQpMLSEbb3\n8TlwUvtc+/6PkCtJbn/7L1HKUjablCjOOCFiSksttFDovhJHTMrA+Cjjk+uIooTpxTqF1Rrbz91J\nZzWjS4eC1cweOEQ18NkSCxaOT7PtpS/n0QceYVk1yXcE7/zdD9B+8//k5MEFBgqjyJbADkj8oQLt\n7/8Av2BJbMbw1t242qX2wIPMtVYQlQLtpSZpahl0U849a5L+fNCDop5SSjEGKQ2OBU+CyWJIExxt\nueqSK/nUP/zDaTiRXcN9n7KUp0jGT6muKJ0yVB0Cazgxc4zAsZRGxnHcPHfdfSff+NoX4YVPmxxW\nARFKL+M5eSpDObaeO86j35+iuRBTKEmEEszXLJVSl2KQEDo5lEnQTdi3/27iCM6/eBP+xC7y1Ygt\n55zFWVv2cPDALJnp4JlJhLsFo5poLbEiBfvsANb/7M7ydnpi36W1709h8j4ihPjNte9/6xmYvAng\n20KIbWuMlzPGNVddSi7MUSkUSTsRnuuh4p4TXypVWJifpzxYpd1t9eqDdIoxLsNDVaqVftpRRNIy\n1NyI/XN7cR2fgeHhHlppbbzxZ36OfHGAQ7fcxvCJDoW+QUq//Fae+L2bCRsJ2advQ+mI9Srg8NIU\npWMH+eYf/TEDuos0DnXPUsg0ie/ja4MSFpxeJbQ2DqGB0BiEB14iiLtdQi2xQpAKhSMkA5lHU7fZ\nOL4OsW4YN8yhVurM7z+Mci3TjRo3XriHqSMHaTtlVlVG0K4xqQUbwgItG+MZn/v2PsJ4PsfSbAuv\nWuYLH/worBxkPF+lu3uQ7pe+Sn55FbmpwHCkiS7eglcqUHtyjs9+7A+JWsu0ozY6KNLyPFaymKWl\nFfasX4ejnB40eK091wJWZwgJrvRQWiHSLkmrRehWKedLNKPOad/EAtasBWrgDD9GCIEWmvVjY2Rp\nyuY921n6xm1k8w7CGo4v7MeVZ7JAQ8/HlZK426UdK7buHkMNdRiYqDIzvcD8YpdSLs/RqYz1GwO6\nThdpDal2qB+pUW/5LP3gEHP1OldcPkp/0MfWDW/Cso4dZ4F1FIiQILwIETogmuh0is7Je380YxFC\nTAIvAz4IvGPtx68Erl77+m+AO9cM5jQmDzgmhDiFyXs6DQwAJ0vYsrUX5102ijiJibOIWq1FkMsz\nNDqK6
0gGBvrJsl6YshgWmdi8ncHJCYJiEWs1SdJAOj6OIxHChff/96euEfZhdIe5j32OcZkyU+zj\nwp+8hvGPf5pmfZ7VD32CTtrEkz7rp+d59L//EhUdEMs8qZtSSAHrIVNFI/RICxX6lheoW4XJSRwr\nyGUCKxMeeN+vYG0PR+oqH9wUV1mavqCdC1kpFZgoVWkvLNCIYuatxi8PsHPreSwliqUsReUMS80u\nm41Pv+My16pRzgJ0mFHudymcv4Pnq80c+sdv0F6ahwDud1o4H/gwkz94kvUdgVut0Ng8QnJ8jvv3\nfprjPixmKVGa0jIOoZLYJEEqaLgVHlhZ5crRKp50MFbTUw22OK6HFC6Ztigvwe3GqLLE4nLJnnP5\nzg/vwVqNo0xPLFGKp1Toe+Dg0ztMNchTDnrl85e+8EXkP/wX9AlJuz5LwfXopGeW6HuOSyglxgmo\nZwlplIIrmdg0RpQZZhe66NjQMTB7QhB4AdLRnJxuYmRAKe+yYaLKk4ePcWDqKP/zpteQxcfw/GGU\nl+GIHEkW4zkarUOsrOK7m2kmX/3RjAX4OPBO4OkNyj8yJu/Kl/4E1eoAQnpswWKURjoScLBWEqeK\nMAwBgxAOWEucRoRBgNaaDAO4hLkBrDEgBPoZ+lOO5/K93/ozLv3zD7H3599GtdHijle8jnM7bWqO\nResU5buYKMYan4yEOGhRTBxcY2mFoKTCtTle+Om/Zjrt8sB730V8fAbH9ErUMynRIkAZgyss0kiM\n49FxJI7OSKzC05Jmvcn04iLDY8M0soj5RguiiHqhyolul0aU4vgNtvUP4dVWiVWC11dkyXFYv+dc\nFj1JLnBRrmTzy67lnq98lSBTxDrlvrvu5bBV7A7LXCj6mTu6xG2deWo5aFuPttCEjkBbSb5UxLgC\nL4NUWZ6YOsHuwT68WPeijlLhyh4zRme6twBpDSojilq4ScB1V76A7957N0brXg+87EU0tV7L3K9t\nLKcQ3H2VCnHXEvRPoJwif/TZT/Hxm95Dzg9xvX4mh4b5PkdP3zfpl8hkinAMqpMxffwE5dIQyliQ\nkq3rJlg+dpK88umuRizGNQaGq+RLFaJkmcmRCZxWgwvPPpe9B4/wnfse49zL97Bw4s8YXv8zKNUm\nNIqureOJJk/84G/5/pcewcsf5dnGfwQzegWwaK19WAhxzb/1mP+/mLyP3/xpWJPjvPrqK7nmmqtQ\ngl7JvpTInEOKxTFrsh6A5+dQ1mCkc+oNoq3FmjVW+zMlPRPL6oOP0Pi511LpGAKRsH26RSJ6S189\n7mJzPtZ1wSiEtETKMOvkCBUMKghVhGdgbt9e+jduorO0QuB45EoFkmYLbTIyZXCtIMEipEu7GHLJ\nq2/g23/3+Z5jnFoGR0bIul2mG02MyfCkhSxh57lno63L9L7HUarFZL5M3Fyl7lvWbZhgeHIEp5Bj\nLCyS6ZRAhLQHqpz/kpdy9+23kVhDxwuZL0FTunxBziDGRnASQ6gkxqQUpUQaQ9vtKesLKdG61yff\nQfLDEzO85KwNSCuJU0MQCFwp8ByXNFUYHNJM4akOjh1gtH8IMo3nuphMYbXFSnHaT3mm0tfKwhLp\nxAADE+vwsxUqrf38/gfewnt++wNUh85B2zPLXTaO7ubAkSfIbBchcjRXEuaOTSFEFZmBtR3GJifR\nGvqqFWZPHsMPXC7YtIHVxhSbhobpK1XJT27g6OwyTxxu8O53/S0idXnH+/rI4h2Ewxci4mVu/+aD\n3PdAHVEYQ8idwBP/7kT+j3aWy4BXCiFeBoRAWQjxGf4PYPJ+573vQloXY2ENV4MjeoZh4PTHfcbH\nbkEIicOpbd6AUFjZS0jKZxjLtz7xcbY0Imqv/E3EGqnKTxULIeTiEka2yHSGZ0EZRUlITmT9XPDe\njzM2FPLP73wLG9OIxbyl/tF30ybPYNTmRDGgmsakjiZxDEqYNWkegUKx6ydvIN2ynmywD7mckNeG\nmZPTzGlNKjWVTFO1Ei8nqc9MY7wCTpzCwixyXDC0bR3rN02Cl+slWo3Fi1MCBd0iVDOPwvoN7Ln+\nZdy//wBNR+InKQeKLn1akOtq2r4m0RJpJalWWGnRBpJu3Fu9JGTWkuCwf3aZi0ZGGC5C4hiECJEy\no+j5BNrFGkuaKtK4TRy18HIBu7fv5OFDT/aOXuYp5x56haXCPiXaHeYKrDSXGNvch05WSeMmhWqZ\nmz71JX7h1a9mpK98xrnF1AM2DpxP5im8smX/E/soBz6H9h2jks9BmJCV8/z8L/0q0ydO8Dd/8v+h\nlyNy3iqNrM7uqy6ApMLSXEqjEdNOOyysRvzDZ36ARwB5RWxDTtz3GeTRr/E7/+NdLK1EDA+9ifd/\n4Ewu6dPHfwQzugm4ae2DuBr4dWvtTwshPsKPiMmzRmDQGNETfxZrp2VhBdI+bW16ZhOBPf3eAIFZ\nU1Xv/ezMOIL42veoRA6JaiEyjcay2BeS7ygaThfheripJm8cup5E6w7DQYCpRUwpS1fmSGQDpS2e\n9siJiBUHwiTGF10qmSbFJxUSbVWvTNb12P+5W0iswcsUygq041CJmlSGRjm6WiMvJF0yVGppLjSo\nODFZp4WqlBjeMcpouYInXbRI0TJAi16WWwvbq0lzXLAwNj7O1m6LgwsnMI6grA3GE7R1r6NSOGu7\ntOrVblkn4LH9MyhHoYTCGgcpDG3f5Wv7D/K6XTvxK6BSgZUpkRAIx+sh5JBgMowCpTQvuOpaHp86\nTKzjXm7FnLo9vZ57g17rz5f4ns/JuTbFgUHiFArnvhjcPIv334MTRTR4xn3rLDDSN4jrBcTNmPNH\nd+KERXZtOocHH3oQTQxZwu1f/Suk8HjrO95CLizz4GOP0J0fZLWxj03r/wv7T9yDyVqMjK5nqbPI\n1InPo1SbweHzGK5ezNbLP8KWS2+kduiL+KaIOFMq+1+N/908y6kj1e/xI2LyTq1CklNn3DVWuuyB\nPU8dqazpOYmnSiie9gI9DauezfR2HXvmziJShekmaEeRVxmrOcm5X/gct//cOwkPP4jjlOk4kqEb\nbqA+u4j7yH2Mp3UWPvU7HFUKS4u5gqSUhWSOoouDFmslOZlFGEnmu8hCBR0nuElKLCwyVfhYEitJ\npKTr+vTpmGxlgYlilZNJg6rvgQloJjFef45O0zA+PEzgewhr8aVAOxJtNYky4DroLAEs0utFmFzp\nsmvH2SzW67TSqKfiaAx+ziPLUpI4I0liLIYgB0GYkWZNMuWsaSD01OktcNxmTHXa7HLLJDlDYDRC\nu7hrCi2utSRRhEm7SFPlrI2bUGmKdMTanTs1Q8xaWFmenjBKhyDq1B+9l6WuZdvFl7K0Oku56uAV\nirjumfdNRS0Wuw1K+RyFYqWXhyu4dFPF88/djjKaVKWYnOXwsWN87ztfZ2xiA9orUR7cSRJrVlYe\n5/LLzubhe+5j8yaPTSN72Lrxp7FCI60PokMr+SKeIyhuuJTlH3
6SQws3Pevk/083f1lr77LWvnLt\n65q19kXW2m3W2hefyrGs/e5D1tot1tod1trb/73XO9VpBwIr1jALTztGnQpLSiFOO4r/6j2tZYlP\n/UY8488JM00qId61heN9Zay1tPYeoVwponJQTA1ZkGPw597I5Ft/nhkriRD4epltpsmo8OjEDqs2\nI7aWhjRE9NIAVjikjstK3mHPL93IC9/6BrS0FIwm8xRaGhLfIe4vcunPvI2m4+Iog5toBgsFCrGi\n7PsEjsALPDJSAg90HOGIXuNbr7heEzgaoWKkihEq6+U8VIpVGb4RXLnnIjwkmdI9IblUY4wgTTQg\nyOdDtFH4QQhIrOkJ1J3Oh6yVyn/zyBT1xBLpFJVlaG0Aie+6uDZDJAmq20GnKZ7jUMrnQZvT/fZP\nby2GtftpobG6yPDYEINDZTaVE5yjd8OhHzB1560E0qKe4dYaR+L7DsYmZFmD2uIUjYXD5EyL8bJP\nkHbYOjrCusIEV2x/Hlftej5jYYnWyRkc2cWVIzz52BGOPjnFdddehGsVlWJIrNpoacHtokyecvhC\nfCYIclcxcdnfcNEVz3tWG3juMHmOS29LEGv/JNaeClyubSxGU19Z7CXFML2SirVHWGtOG5CwrBnc\nmStUqmLq0rLoOPT/1KvBWJ5817vxD+/FBDkQmkENrTsfZuW+Rym5AQkSN80T2oAB4xB6JWqeT2Qy\nEpP2JpjrEHm9DrzBhuHe3/9Tvv0nf0kfBtdJGVKGfu3iRBmBlrSDMrFwsdLHJBHVVsKQn2e1XiMX\nuMwu1hkYGiZXcHs6alphtEZKcKTBFeCKnk+nVYoDGJWC0Ril6CuWefF1L8YgyKymHcfUmg3CMEcY\nhiRJgtUOWuXw/RLGgLFPVfpaa8lnhmVX8u0DT6KjhK5KSXWGNhlGKVxr8RzotpskSUyWZFy05wIK\nYX5NRtX+q//Qm2AZKTvO2U7cbSC9Mi3t0h9ozs2fJB9Cqs8UrDDSRwmPzLqkWpMrBEgfmvUllqeP\n4+uEtLHCZL/D1skS6wYEkxW4esd21g3Arm1nc+mFL+fxB06Qz7s06k3+5V/uRWcZrIFdldslziJc\n71w6di9N/wDSec2zztnnsJDyTKaGRWCtRq8duRyrqC0ucOdt/8hP3Hgj2AJGepxavNI0xXXd08cz\nKWTvnPy0kXvBtQSTg/h/+TXyA32YSCLcNrEymKE+dDOiqNssffT3OOoJiiImcyypm+LoAOkornjL\nO1nZtIu7PvBOZOMkZAlJMErWrONLBV7AmM7o+AJjBDnr4yHIDGhX43Zb/C/m3jzKrqu+8/3s4Qx3\nqlmlkixbsmVhA7bxxNQB4wBmCsYOYIbEIQmv6ZeG5CWkQ4bXeelOd9LphKYTaDIwpDMRhpgXZggQ\nMBhPeJ5tSZZkDVWlmu94pj29P05JtgTtlbfy3nK21l1VS7dK9+qevc/ev+/vO9z/l/+N2bKLaM7i\nVU5qoMorptMWWdbnit/7U3p//EHOmiuIidGypr2HoBDW1+CFTFg4vsj0lu14Z5EiBmtw3tMdOnSr\nwyuvuJIbPv93dCbaJI2YgRkxHA5RQtZ9kHyDrOgjhcJ6gccihdxsHAYin3C4ylnyoGwgmAopAo00\nResWyucIl+CcIVWa17zyVdxy+x31inA1cRLq936yky8lcSPi1ddeRSodx275KlXep3Ka3HkmxiUb\n2ak8E1tJnBAkcYyvHEoEEidRUhI1BHGkIIwY9lbJ8yHDYca2uTPYs20aqcYpnaXfdrzhNS9CJimX\nXfpKXIj4yB/+Njv2nM0FF76c3edehFIRlgHaPE6TFxGiLk83njlj8E0fKe8qKpdRmgGBCnyB9wOG\nRZeok/CGt/wUWVkfe8JTapI0TU+6iAA/cAQAOHTvIc5/42uZrhzmH29mFEoImiWjOfunfprhjq0I\nG5BuhBYF0chgrGCqlKTWEDtI1Dh900SMpUQKmpe+hnN/5c/pXPF6XFKR6YperPEuJyKwICPMZc9n\nIYZIB+KqYLq3Tog6TL7sWra+7mfqYNCgmSoc20eKJz7yMXpPzLO44klCoKM8UeSwVYUIJVo4NJa5\n2SlS6RGuRPoKYQp6a8tIZ3BFxkTa4i3XXAceisoxyEZ4AaW3oARSQppEeBxBBgT10ck5V1twC0dX\nBm649y5GNsaUJc57DBIfqto8r6ooB32E80y0tlDmo5P9ldNpLlAfxbJig9mzdvHVr9/J4Q3J2OxO\nhDFMSsXzztpDKHunXLfSBkrrKJygdBGF1eSVorQKp2IqL3FBUWQ90gQmJzTeb7C29AhLh+9m9eDd\njLsBF+44k+ds3cqEdySjNZ63bQtnBMn8nd/igVs+z2DtEH6wRLX2KN3uz+PVh552zj5jO4s1FVop\njDMkaQOCw1mHFKBlgowTpIzxCGJqtCCEgJICvIPg6wwPEfCbF/30QIJzRqvc9ZZ3M6MDpZSESFNp\nj1aSmYbi2NIamQoYCVmwpKnk4FgD09xK5/gqM7bHwT97P8fiiKg8SlRIioUFtocVVvI1SivQsUZY\nSxISrGrwo//2F6jOPw+TfILFO77NtErwwTBQgYsvu5hDx49RKYWxFZEIxJUjvf9WfHuOO+ZHHD88\nzyte8hyUHZHSREqLlwILREISnCGSULqSEDQTrSaE2m8Z4xEW/tWLXsrXvvtNqmGGkIq00UAjsMES\nxxEhz+AEr2szyNR6iXAWgadSTb537BhXnj2DzHJQLVRq6k/XVBT5gKIo0LrFtrktLG6sE8IJSiU8\nte2mBMzOzjE2cw6v/8XfwgvH97/1ORpTu/n8336aBw/spUpO1eC7uveI8Q6vBFKETVslSFWC8ZYg\nFcI1wAgQCi01wXtaTWhMeZrxBspV9IYZQSjGx2aZnJihmcRkpSdUx1i4+xHyookrAs30Kpqnucyc\nPp6xxRKnDQSgooSARKn4KQV6QAoAgThRl8Dm2dgj0PXkkLVir/4Bt0nAe3KMV5aSjMiBlQ2Oa8+O\nIrCuDHe//yNs9Q7hAxsiouktWQWv/r8+wPS5O7jzLT9Lx8e08lXOMBJDitCB1sZRHnn/LyKUo5mO\nUYQKoxyi8oxExT1f/CIXtd/OvXfeR8PFOPKaRh4k3//gfyavCmTIkWNtQtZjMg8M2mM05rbQX1rm\n7B+/nvu++0mefdFZjEU5jhTnJLFQBBc2hVJhk4/lkEiMsQhlkdoz1mox3pjiR1/4Mm6972ZcVRJr\n8MFRGAhaIpSGyiM34XqUxLmA9xCcxBG4b36JPXOT7G4ojMvQoYUjoEKg6o9wowFRa5x//VPv4nc/\n+AcgJdY72EQorQgkUUykE6659k1AE4thtHyAy59/CX/yR3/FrUcEldOMR6dOQ+dsbfVqAupEkpcA\n4z3R5pQ1XuCspxUlxKo+VlrpqRDMdlKec+k5yDGLV22iWBDiNkXZZP14QbayRpoVbB2PSaebOBMz\nckOy3qGnnbPPXM3ia
5pEXeBvnnU3UZFNmcQpW/pJNCyIk3dDNmniUHvvitP5mg7wCpMoslhz1de/\nxndeci0JA5TpE7kE5wSRrDvwoxS6+/dzpjakLsN5QASM86QIKiXwrkR7UELQsJIoNMm0xOuSyHn8\n8lFu+sP/SCQ8oqURfUshE4IFUXWJlOaSH7uOg3v3sn5gH5U0xGVJSsFMMWT10BHyY6t83wkuuehc\nxqJiMzErBiRaBlxwRFLWGTfS4VxFMLXzhyKBUnHROecz3k648ZbvUGEoipzllXVaYx2cC3WNFzze\nh/pzD5tyYDZ3nCjm5nseY/tLX0CS55Q6QioNroIqo+z3ac0adp95DhEKL+rdL5yA8wkE74iE49o3\nXosiY617hKTq87lPfZXv3/h9XOEwPNnUPDGMqYA6wkKi8MoRRRohJVXlUApUEMRKUBYFIonAQdSA\nqlIsLVUMb93LS372J4EZvOgQSEjSmG3jijOeZTFCoXxOb+NxukuPMVrISfXTu7s8YzWL2KSE+6cg\nJ+Ipfwin0iaemncOJwnh9ZPBo7VG6/iU1yi1IVNN5CteQcDSd4YVa3He4bAoW3F4xxwFisSBK4Y8\n+ucf5Tvv/XfIoqLEUniHFYEgDAmO4C2lrUgLQ1dVbLv+ana96VpC2sYFT2ZyMhNYJ+Ky115H1d5O\nGSdYYfHagfDc/rW/Z3joYdqhoopr7+Du448zYQoev+0fEZViaRBx60PLdE2gNAZHhtUBoUFr0Fga\n0hMFjyYgrSGUFS7LKXob2GzEttYWXv7CH2Xl+AajUcWZ27bTjBLwDiECztrNBVPXLc65k9/bYDgu\nBQ8vLDEsDKOqgOCIdQBXUAzW6a9t4EvD1u1nEDatW1EKAigliaUj8pbx2JINDzM1oVHNcf7ybz7P\nE8vrjFzFRm+j7iP9kKG1Yjis7a1OSJSdsxhjGPQHDPOKoCJKYxmVFTZIqlrYiTUR3/74xzhy910E\nFyG9AilxMiLTGikjRDTJ5MyPsOu57+C8V72Hs1/+nqedsxGWxuAAACAASURBVM+cMbj3PJVRdkIX\ncbpWTfyvHifO3Cfi0wKI01LOyiSwMTPD1ndcgxqV3Pz6t7PHefqRxIcWomrwgj/+PdIXXEaQUEUB\nXwyxNuBw9clOBQahRGkNtsIHS648XVkRmg3iyd20p86g8iU5MI9lVcOOiWlu+eIXyNwAnUTkIWAR\nCAyJA+kiogtegD5jD5FsIJ3ASUfTGQKCuNVhYXmJ+/uSlcJTekfAUpQ9ZCgpswHeFURYNBXC1T0Y\n7TwNodAeQpBMtCZ551t/hunmRL17+ECn0YRALdPmSebDyRtSgNgFjEy5/eAh1iUMhgVKK7Ssez9V\nMcQWI7QXvOtn31kX+Sc5SpveCRJe/IKLGJtIaHc6WNPgC39/E06MEyVtlrsbyCgBebrJnqkflaHR\nSKmqCmMMVVVhXZ08JoXAescoy6hMQBDR7RVUxlO4AucGaKtZevA73PaF3ybkd7Bx7Nso2SMJsp4v\nGDw9vCwJ9g6y0d6nnbPP2GKprMUJtSlJtQQs4AhPWUH1YnA/9CGFB+EwpiQEgQ8KH5261NK3/QRz\nywMe+/TfENvA1MYGVhW0nCeVhmFHsPDZryIuOBOsJfExFsdI1cechUSyaDZ9AbI+LalwISELMesq\nY3k05Lsf/gMe/PgfMl95DrQb9N0IbR3l8nEGZeDsq9/J+KWvIBYKFwKOnDR4sqZk2+wO0l0XMpza\nSiQMbSPwPqKvJNXxgyTWY32T510yxaMbY4RiAykTRHC0Ww1SLUkFpFISSdCiJISMyowY9TcIxhBL\nzWRrgrdc/TYuevbzGZ/cRlU45FM6+CcalCcbwZHGC4VyhqFM+Mr+Y3Slp8xNXTuJgLKGfH0V4z07\n53agkwZCgvKOyNaRHOefdx7XvuxK3v1v3ssnPvY5Du07yuc+93nWSstKr0eqx7BGsLS8fsp101qC\n8FhXkRUjjLU4X7+/Is+RUYyLEqxxlGWJdwaPwyGpbE4aR1DFRJWkVzqmOrMM59dY3/s1RChxIcKJ\nPt4OWN3/JfqPvA8ZVmg1n/+0c/aZc6RUGk8g4OBEzSL4gZ3lJFrsN91DTkomwslckBN0GBNO3VrO\nuOJCVv7k80RfupOiKvFJoC81RoxRnbuNZO8+qs98hULDhNQMCHR9ReKhGySv/cAfMr/U5Ut/8Bvg\nI851jpYQLMSasWorLuQM7ZC1xlm85B2/yG1f+msS50jwdEKgLSVHvvJJ2kbTCJCJNh0DhRZM9C13\n3HETL/rV30eedS5rf/dnbC0sPZ9jtKn5XDrC9O6jx4XcuPdu1rc1uXg6ZrwhIfYomdS9JWtIZIzx\nrq5HdP3ZuHKIKzXaN2nGMZecdxmjEkRjjAfuuwthS4I3IDxaxyehX+/cJtsoIDys9YYs5Y5tqUNG\nEKlQd/iLEYPhBpEyPP/C53HbA3fWjOYkQijHytIxtm7t8Gd//XGQCUE3eGJ+hXxkUFJTVQZkoDyt\nmVwUBUmSIISk2tS6hBDQkULrCOPq99lKEhAOscmORtfw8sLSGuPNJkmqECaQP3qIbKUgVRWP3vxX\ntDWo8hC9FcVEMsHM7DjhyMP46X+pNYtweJfXDNWnbP//CyoZsPmcDyeNEJ7sEivw4P2paNjyu34b\nKwpWE0MQgbTy5MqRvvBCLv7YB1jSHmFGJOUQqx1aCRIdU3gYCsWR+SXWu12KUUkcGUoXaO15Ma97\n34d51qtfQZpKtjdbTBcl87d+i4mNI0ybAcpXlEESBcl0mZOIjFGzzSVv/99YGN9KQJJIwURRcN+f\nfoCNL3ySSTFgrVUwnJjGtafwUqNx7N+/wZ9+9j7yMMntC4p/2HeIlaAY5iXGFMhgaEpoSk8cKpJQ\nkXiD8iXaGLQtCHmPMMpo+cCrXvAStEvZvv0ctKqbuko/6YRf964ksElhIVB5ydfufojFLMcLRyw9\niQpoV5IPVunEDa5745tOYi5WemIVMcodv/vBP+ZNr7+Oa171Bl500QsRSKZnpk/e+EIQVPbUaz4s\nHCYoik3T8RN1VFUajPFYE/AORkVFZT3DosAEgXWCygqIJxmFlJFLcDQxJuXw/ID5ox0O3X+YIw8e\nYOnQFCZrkcuKDT+gXNqgPHIXTzeeuUzJssBrSbCOSEebF0o9SYw8MXwAKfDySVGRD/5UlCwEwJ9C\n4QDIqiFSSyaHFXmkyRWkztC/7W5Gj+wlyJRSZDS8xMYBWRboKK09AKTn1j/6PWTlacuUyib4aIg8\neCuPv/8RoqLgXO/IIo2ioNp3L2PBkqkGggIXt9m24zzWD+wjEoaOaXJ8YUQ36jAm1xgqR1xW6MXH\nkTrDes2y2s62y69AFAus3PN9pC9YJiUKEcp3yYTkvtyzessdvO6i5zEnJMIZokSjEaTUk1sBWVUi\nVZ0K7bzEA27omZSCt7z8Fdzw9S+xtKARosR5h9YaISRlVZM1BapO+yIghKJMOt
yzbz/bOxcx3kzQ\nwuHwlPkAX9TGf3PTW1jprtDwIGSCJWb/mkMIg8mGdOIUFUM2ypBSoqXAeU9enRo5sXj8OHlRMNZq\nQiRqJAyJC57gHb4q0FoTC00wDmvKWsYQN0ELghNETiC8wwdBkAErN6iiDrmxWDuGTdbRSkBX4wpQ\nvkCFp4/JE093J///awghQjGaR+kIdAxENTfsxErxoUbLvENIe5LPQ4CyDCRxgtyUwUI4qflyPpD+\nTvvk69z3qRfSN4FGMDznE3/BV958PZNpEz/MUVJjgsHJTU19lHLlX/45n/nQB+jfcSvWOlpeIJVi\nKZS04pQpF2jlEaXMkYAUEUUk6abjrEQRybCPDiU9CZoJ+pMTnNduMzi6l9jGDKOSyie0Q04hGzgR\n0RR5feYWjnxuD4eOz5O2DDNTu1ldmyfb1mG1t05hKjSCICUm5Gyxntc+69ns2tIhji2tuI0JCqlj\nSp9ivMLgcULjRIRDoqMU2WhgkxjZ7vA3n/8o43OOkSspywKROwQxSxs9gm2wnvcZ9HI0CSpUSC+5\n/gUXc/n0OJVKKUKCaY7R3r6LdGKaxe4af/SxP8IHWe9Yqo7QM9bw+itfwqg/4K4HH0bFEWVR62qG\noxFCCA793OLJ67b7o7NIAq1UMzM5wfjEBFrVzVWpIYo1WksiGSFlQAlHrBWp0ggliLREykCkFZGO\ngIBSEikkUtU9vPFUE0uFFBWdMVBKEMUxL7z+I4QQTq8GgGfwGLa8vIKQehOgsXgsIViCACcUNoAX\nGojBJwgSpEyJdLwJh3lqyfGT/2aQp/4fS2J2vvud5L7F0dV1qm1zMD1JoxXjVYEVBuUA72iZnLVE\ncNk1P4X2EUopSinpBYdVkkEJ8twLQJs6I07WvQkZJA2nuejFL2TU0MResBVNWwyYHiyQHTtMhKhV\nmMkUHWnwQTKaPoOLfuE3Gc2dTRCBYAq68w+g7DyNdcH0G66jawNZb4Msy07CusYaLIJVrbjhwD6+\ndXABVTTweY5yFdgKHSoUGdIUCFMgTUGocqoqw2R9yDPizPLGV72MWB+nMz7Ajwa0Isv4mGHXLsm2\n2YoLzp/mzLlJdHDIIJBRys0PPkyxaUwhvMXlQ/LeOr4smRmfYmZsBrm5SKy1VFVFojT94YAHH3sU\nL+pdLwjBKMuIoohG41RjOxvq41lWGhaWltn3+AEOHT3G8uoaWVbiHTjn63CjqsI6cEGSV4bSOkrr\nqDyUtv4ZLxSVDTX/EElhDBulY6FvWB4FVkcwKAUrvX+hyV9TM7N4HzZZtlXdWPOeYApUyMEOkSHb\n9NF1hGAIwaB0jYSdgDuf2rjU/tTFsuPd72B5rMPKbMxjP/9b2I116GV1xIOTaN3iaGeMxVaH4D03\n/szP873f+jWmrKNtFDIofBAIJ8gabV7y3l9nQxWUSmHlZvPUOib7G3T/4RtsHxmkiklChHvOv6KY\nPB+hPJUT9HXgut/9H5SNNiI29ELAzJ5LodtU3jESCuE9KTFR4jl8442MTU/T62cUZVl3tJ0jEJCb\naNNIwB3LS/z1vXez5KEqK4SpEDZH+YKEQEuAMBXSVgRfYKoRohxR9jaYVDu4YPvL2ZZu55Lz5phs\nGKY6hrntTbbv7DA+p5nYKpja2iAISWU9C0XOcJNFgLfoUJF1V7FFTqwifuyq19FqNmum8yZgUGY5\nvfUNhqMhQYF1jvV+j6TZQEf6B+pUD1SulhsgFD5Isrzk6LF59u19nAMHnmBleZ3l9Q2KypJVlryo\nAIW3AWNroMNYR1E5KutwDsrKUFYGkFShtpgrKsmwkKz1LGv9U4m4p49nzmSvdaKTHBOpgDOeOE0B\nw+GDe5mYmKA/GLF9x27273sEpSCOAmONcYKMmJieqjlhIsWegEBPG4daMQf+4rPMbAwIFEwOHDIO\nxCqmFzRdoXjFxz6M7q9z6y/9Ms9aWaFsVizGgqhSSCcp4kBkG0Qq4osf+mPaRiGwBBGQxuB0YBgV\nKKkZDwkjr3AInvOaN9Gc28Gdv/0udJkxJOah/Y8yaLc5Y5ixc3We/f/+J2lR4ILDh5hhJBh3LUq6\nlI/cTdg6U/sKYzHOAoLgLVIoQhXwOLI48CCBjTvu452XXU4qKhqxJ46buLKiEoEkScBZhIgYWY8P\nBV70MdEWdm59Dn5+kfXhfmYmI/pVjnQKH2uUhKQTmAoxK6t9GqqJEi2ObnS5aHKCICVFBYm2jFaX\nQcXs3rkbV1iEDXhpEapmky+sraIaTfrDIVVhmJ6YwIWANdUPkGBDqNkIhfMEKVEIEhkhg0cJGHT7\nrK6uoHTMtu1bSeOINI4pxlpoCa1mjBcBHTRBGKzzaK1qW+AAioDwHiUUEChKSZ51QTz9cnjGFsvq\n8jxTs2eSW0sUaSKlyG2JkpK5neegRERzLFBmI8477zyCc1ifEekEFwTeB5TSSGEJHqyxhOjUBfPY\nR/6W2SpHlYbIKXqJwEmBloGZrMTHCe6hw0zunCAqIFMC4xokwuAiGOVgY4OMNKoa0dh3H4KEECxF\naNB66VWEfA1x/72EylEoh1CW2I547L+/n/T5l5B0B+TCkFjNox/8z7SUphu7ehe0lhAqTBKQDias\nwzb6TGeSo3KNbLVHJOv8Ex/CpuZnEz5X1Li6qZunxwR86MZv8+YrXshOZ5mwdb0VAqAlQSmk0ORC\n4L3De4fJC9JWm3POeBH9R4/RaJXQiOkWEUYZqipG0yQf9Gl1GpheCQhufugRLv7RKzG+RCcprsop\nB8sknQZazvC+X3gvf/bxj7AyWqMKhqQR0R3lDPMRCTFz09NUZYUkIGWMlKcultooXoKArLQkMqCd\nRCuJ8W4zFU6RqIj1xTWG2Yik0WD2rC200xTV9UxNdOi02iSJonIWqSSmqNnWCEsUN2nGirKqjcoF\ndajU041n7Bh2+8034V2FFgIfPDaUaA3BebRsQtBIGWPViKrKQMYoMYUXMcgUVL2j2OE6Tzx2Lwcf\nvpvbT8tTb3S7TOUlJSWlCiQ2MBGaVD9yAWvtDt6OuP/3/wv/8J73oqTDKE/whgjH9Juu4/IP/SnN\n3S8hKiPiqiIpS1LnUHj82Wej3vluzvq5X2YZBQq0kGg0NrZM2mWim75JoQNNNPiCXBv02XO0p7dQ\nBIOPPE6lNC57A/ELXkaXJs9+1fUsyQ5eSFxZMdaZIHiPtR67mer7ZOCpREqNd7VKdLUzxqfvfIBb\nDy3QrRzeQxIkOIOXFa7MiYVDh4okeEQ2JF/tgtnCc8/7CfKiQ5YbjJe4kSMblmSDCo1CWY8NHqsC\nR4oRyxbSJEVLg1Ye5SoGy0tU3Q0i53j3u97F2695K01adf5jNmJqrI2OA3nZx4uSICqktJvRd0+O\nJ2Xkou7/oOhXBUNvqRB4BFJpnLEoKem02mgpWTw6z8MPPcbeg0c5cGyZY4uLHF04ziAvyU2gsL4m\n7coGSsfkxuIEWGvJjaXwP7SuPzmes
Z3ltde8heNHjzC3fQdCavCeUTYieDD5KlVVsbK8xLDXpdft\n0ev2CQiGo1XO272HhaPHaUYJ7bGE0ShnbutWOvpU2kT34rM4865jaKsx22c5WA2YXUq47N/9Il8Z\nfQh95/dRpk8DKGQgIJEiINA0kCw2O+y69MXs3fsQUQkOj1CBIC1udZGdxw7xxN4HGI80Idj6Zu8d\nURhnFFvGmglzr76Wo3//GRIMoT1GPx/RW1pjCknhSoIWnPcjV3Hbl27ASag6EzWzWAhE2qEzOY7o\nLmOtJdbRUyZSPYKX1ExfRWRhFFJuOr7OY+trvOXFl7IjihgTgmGw2MrjvMMHS6IjlPeIKsPYgqR5\nJs/a+Qa+e//f45sGHwS2qpiYjOnmGc54tNJYEciThK/fcRfXXX4JiZZUwdY7XplTra1SiIooabFn\nxy5+6X//eX7/g/+N3efuYXn5KK1mA2cD1pon+zinmeydCHb1HgQaJ8Eg8KYi8rqmHjnqmG9val6g\nVDRkE98UlAZW5wccPnScOImYmh5ndmqcVjNhfKxJkiY0ACkFMkhsMCSNCaw7tfVw+njGoOPPfeqv\nWJmfR0iB0BopBY20SafVRgqoTmDpShNFEd7XDaq8yrFFRRIlHD1yGBEFLrzgIoaDIc1mygvveOvJ\n17n1+X/B8ff8GWasjS1KLv+vv86D7/s9dv70G1m7cR/9h29GqfqCeSdY1ZbUS6a9Zz0SxGedy6GD\nh7HCMmk9KRIrPakVCAmZiLACIu9ousB6LBi79EV0u30mDh8E77E7n0tx7FEKM0S7Jgvn7GH37nMZ\nfv2zdFONDo6BalA5QyEqpG/SlQHbjJB5lxVnWFcOr2PSKMZTqxCtqPUt9dGsDhPSCIIHIQNJJOgQ\n+JkrXsJZkUN6y5pLyK3EK40QMaV1qLiBb7aQjUl0u0WvWuSxfbexd+MxmqllfHIC5xsMVz333bWX\n0jZARUQi55dfdiUzShAaCb1hiY7HcV5TNGKkTkk6M0xu3cYgZDz4yL3c8+DtdNf6BC+xrlaAWl9r\n/Q+860noeOdHtm3unjWDIAiPkgqha1RFeIdSkvEkJjiLkgItqF01Eci4BmaClxTeUIW6hhJ4mp2U\niZkOMxMtkkiTJBGR1sTNmKLy/I+Pf/VfHnR89MhhjKsFWwKJdwEpFVVpEEHSbLTxroaGiyInz3O6\nGxsMR3mdex4Cs3PbGB+bYm1tg4DAngZmNHvw3Z1NXvR/vodyNOKu//CHXPXf/wvrH/kyw/2300AQ\nnAdbU0Te/Ae/x9Wf/BQbUcSsC9h9B0h9jb6NIkccAgJHEDlO2PoIgaOIoR+BPWc3c298Gy/76etZ\nNjl9MtaP74cYSCJGWvPKV17JE7d8HaFrsqFwhsTk4EqEF0TesOfsc8gHGQGHkaBF3XuqhVVPhsq5\n4PHBccJfuNrUu1hnKZ1nHcEffe6z9OIYLwVaS1qxRouwKfSq+7kuK/H5BiHPmEi2csHuK7jqwrcz\nq/ewfGSN1Y15XNnl+RddSCMWBFPhUNz+2H76xiKCpNNIN30CSkIxwOVDitEGy8eO0JJNnnv+xVz3\nxp9mz3nPxjgDKjAyGT6ETefRU0fdJK15YpGqXWa820RApcIGwcYwIzeO0nuq4DG+RGyyyVMCTQQd\npRnTCY1I16aBhWH+yBJ337OPW77/CHfdfYj7Hz3C/kcXWDk+/IH38dTxjC2WUVaR5RWVCSgV02yO\noaOUtNE6aYOktaYsS8qyxBhDo9kkiqIalvQeFzxpu0PcbFH5wHr/1LTie9eOc82b38zNH/00ZRzY\n0l2nTAKt/gjl1mtBkRI4bxEEvvbt7+J1G58myMqSWF+7qQRLyziKSpNF44x0iyqAcqJGaqwndZAf\n75LML3PrJz7NlI9oec1sUTI0gs7Q0sRyz//8FDMm41jUoZrcihIBEWxtSqHBh5yVQ48x0xBkCEpR\ns7Mj96QllJTypCrxRP8l+DofxftAEDU861RgNLaFz912D5VMaMUxY+0GiVY1tSeKCdYhfSBkGWZj\nkTAo6DTOYLpxHi++4K2cv/ulTE1Ns+XMGaa2TPPc8/fgpccHze0Li+RCYIoKZyyRsEQ6MKYh8gU+\n62HzLv2l46TENNQkV73ian786mvJ8gHNhkYp+QNomLX2ZFyFEE8miT3VPxnAx5qeN/SKgl5VUApJ\ngccEgXMeHwyaQCoFjVjRbiakUUyiEprNaRp6HFMK1o/nHDiwwAP3/wtlHQ+yjMIYyrJibXWd1ZU1\njh05xuLiIqurK2RZxmAwwFqLlHXI0YkPKknqAFBrLcNRxuLxJVbX1rnpezef8hpTk2N02571McNW\n49BJyi2/9Bv0mkMaZUJRloy8pVTgBwa++m2+9OOvZ7Z0mEhgde2UKULdrZ99x/Vc+v4PMPfjP4nU\nGoNHerBCUknB2HCJhz76O4R9DyJCRTeCJRU448pXU42PUySGxG9A1ORFv/Dr7Ln6bRgiFKDwaBSR\nVsgyI7IGoTVRq0EzSeujl3mSKCpOIx/WLpMC8DhZE02TPOB1yh2PP8FACiItULEkjiVaKjAOV1mC\n90TCIfIeJlskii3ptpjMR7z48qtpt3Yi5lrMnL2DsVaTZjNCOE0/TTiwuoSpHIiAdRWYHBVKUukQ\nVU416LJ+/BBHDu0njjSTzS2cNXM+/+l9/5Wmb6KCPKnhPzGiqK49T7AATngtKK1RSp2UFhTeg1L4\nJKYQkpXMsJo5CpEwdIFcWvJQUZqiZknYCrxHi5iJWDDdiphIYbKpaDRaNJrJ087ZZ2yxtBoxPhgK\nU5CbimGWMxrlrK13OXZ8hX0HD3N0YY1jCyscnT/ORq/PIBthTEG316dX5gyMI8iUm26+lW1zc2yb\nmzvlNZpRinCS51z9GlYbtbl4x0XIODBqdcifvZOXfPoGBueeh8Kg8py2kPSzghJPZDSonEomNJqK\n2bExis44US4gSIK2KJsRbAUikKqUyEe0TaDbsMQDuPwXf4UzX3U1PdGgbSOGiaTvA0v33csjn/0b\nTGFA15JYW5Z4EeFErX/JZMWu7dvYMjeLFA69uYtY506aoMvNRSSokUSCQG7uQl4FUp8ROglHRw4Z\nS2JhEA6ch+Bz2rHF25zK2ZoFPhrSf2I/7dBgbHqK3iDmiovfxkRzD1MzZ+KEY7bVxskMCsF9h46y\n0svITUXpRiAlAgdYWg1NLA2pN7RsSffwE/SWl5jasoWltSFveN1rkFj4ITtLCAEfajCCEHDW186j\nwSCdpcKQikBDpRAihJYoqUEKVgYjNgpLN6tqlx3rKStHZWvBm3YVo7wiK0q8raXSiYDkNF3N6eOZ\ni8nbZJMaZxHUd4y8yImiaHM30YDFR3Fd7KsGc9tmsTYjKIfxgiNHj/Dww4+SxJrtZ+ygcvYUX+de\nUTGsMo70e7jLnsXwngNsyQzLvsFkWbC8NKQFnLlzJ9kjj9Wnv+Cw3hPJiJUIQNAyOYUQPPixjz
D6\n/N+SbAwRzhEJgWOCntdMaY0qB/hYYKQg9hFtnXDvJz7D9HPPpxR9DJ5WofGyYPStbzBGxvGxcV7y\nghdz903fpNw6g11dIUokpTMYAlVwTE5PsbCwSFVaXGXQSU3HD84hlDp5XBFSbnqr1apT6wJxFNG3\nA/Y+cZiLZ5+LDHVumncG6xyRhERAWTkSJcjzDO8Ua3sfIp7bgYpS3ChiariTu+/7Jt3+ccoyILRE\n2pKQChaHC3gxQVNJfAJRolGxxJqcNAhi5QheUvZyqrwLjQbn7jqLQ0e6tfnED2kGel/Xkh4I0hLF\nGkGgIRSxBuENthTEURPhDMFHoDwhKKQUVEFgfEQ+smgCcazQ1D0cH0TdqBTgawp7rWOX/0Kh4xP0\nDesszlsU9VGrLHKiSJ4kSSqVcuaZZwCBlZVlwFJZx3duupUtc1v51d/4Db7zrW/xt5/6O6644kdO\neY1/+MY3GHU3WCiGXP/atzIcOFr37Wf27T/D/Cf+nLlRxRff+S7apSUREi88Igg00J/dwqv//W/y\n0d98L53eEGE1TTegtdjDExhqhTTQP/ssrnjfr5IffJS9H/4QbeuoooQ4VLjIM768AIvzjOuc4bYd\njNJJovkHaJY9Mik589XXMpzdTn77bTTdBEM1xApDKT1GBKwWDDcBjjhK8b6WKcgAUtfGHScdO0PA\nB49SJ1IForqTLjQH5hfIzfkIX9se1YvJoWNJFBwj62joBK0UlQmYXp+h3cfEtm1Ixtgzcy5N1+PL\nB7+CrSyuNGjVIB7vsJYUNKxBk2BsiQ+GyNUqR6kUwgaUqJvIwRj6T+xn+6WX0Wy1GZ+aZnW9+wNz\nI4RarSqlhLJgttNmohWYnknZsqPDlmdvpTmeAi2+89X97HtwhcpnKCUwjprRHASogPGWyjmEMWgE\nWmriON68KYIWgPenb3A/MP6pMXlPAH1qmaIJIbzgnxuVVx8dAjPTUzzrWc9h8dgC2Shn97k7Wd9Y\nYqzVRmtFu9VBCMFg0EdHDb79jzcSJxFbZqbwzrPe7fFjP/Y6Dh86yJatpzo7/8TbfoJdZ2zn8cWj\nVD3L0Z2zqL2Hef4bXscjN3+d+OgTbO0GfCwok4C2aU3/14FQRgx0ysz0bsz6w3irWYstjSoAMbEt\n8UIR7zqDvNmgYSKElIwijwoO4RVGO2IJQ18SGUEYeK56/bXc8/H9ZH5AKpsMvnM7x7I1AsusLK2Q\nNBrsOncPS/sfxgsojMEJmNkyw6g7IAiBqyp0EuOecnXDpkyhfkhCAC9rtraWmqP9Xm3tamsSY6wb\nyE3zj4YMFHGCKXKazTbBUMuYM8docZ5Wsk7nrPM4+MA6V132evrqGMNkkdbkLI8+egfLSz2aPkE7\njfKBoqwI1pKkCYmOmIwbNIBIKazJsdUyy8fm6VcVMknxp1GVgvcIKRFS4b3l7KlpJgUkJRSLffqN\ngFuRzIztIo1y3vjOy1hZ6PIXH/weprK1CZ+QaGzt8SAlLkiC0hgfsDaQuwwRINUKKaChxKa44Z+5\nWKjrxytDCE/Vf/6zovLO2XUmExMTeB9YOPYErVaTJ11nxwAAIABJREFUs3aeg7UlOtqCcoJgA1WV\n0+v1EFHCP3z5a0xNT/Hyq67gczf8Pb/1H36Hr/zj17n0eW/me7ccJW6fyl7tNFrcctttRAraiWb6\nnDmESHjksx/HVbWJQalrnpAXlqFMmU4UovTE66t8+dfex7A7JJEWfeYO4vkFouAplMOpiMQpunfe\nzC37HiIZjBi3Bh00MnJ0t55D4/AByibEokGmAknR5c6/+iCJUgQf1SrFwTwaw0g2SZKAcJ6ji0cp\nZMB68M7htEA3IkS/3vkkgeDs5rHrSSOP2gmSkxLhEAJGa3RVQ60PHjvOCydblM2EhjUMQ4wJlvE4\nIbaBSkmkd4xFiqzZwIQxRsvzTHaGHLy/ZCr2FAcPMTO1hTPm2hxfXeAVL38NpfB8+2++hJYxbTOG\nlikhgdGwoF8VjOINto9NMSkhENdF99o8mWhzxo7zWFk+FcXUclO6LKATKcalIBYVQSpSnWAHgdGa\nYGV0jHZLstc9wvRUwlvfeRmf+Z93IrBYoxAqoFxN+VFC1tJkPEGJmiArBAPvEEHQqwzqNKXt6eP/\nTYF/+oHuDdQReWx+vXbz+5NReSGEJ4ATUXmnjJ07d21ChI7t26c5Y/ss3e4SztbFqsVThsDy8SHL\ny0OGA8G117yNV77uDQyyiMktu/nyV77D1m2zfP0b32TX2btYWzp+ymvs3XcvIYwIvuT42nGiKCZr\nB/KvfZ2tx56orZMk4KGftHn7Z26g+ePXYuIG2hWcuTZkxhhkGtHrrtT1gFAkool2kEclM8MhZx+d\nZ1vWo2EdHku+9RzO/fUPcNl/+jCRiBDCEKuALj3NAJW3uNlpMq2wwWKCx3qBCp5YKdaHA6wQCF8T\n/wgBFUckaeNkWJAz9immdqc638CTGfTG1l1pqRTfvPdOBiKgXcCoWtIbfAAsSaRrp34ZaKaqVnk+\new8+jsiP92iZZS5+5Yu4/9g+Fg8+RnbfClMHtjH/wCL3PHAPr3vzNQxlycYwo2sqcgsrZsSBcoXV\n4DjUXSevciLhUc4To9g+O83rfvSVdbLYU4cSSOFoBs+2ThsRHEbKGp2UoEWDdrKVIw8uc8/NjzLq\nDskHfVw8z6/87jvYds44Kgikc4gTxucCtKpDmrQUxFqiZCASIGVARgojn/4c9k9dLIF6h7hLCPGu\nzb97uqi8Y0/53R8alXf+BRfwt5/8JFk2osoL+hsbTI6N48qKfm9EXjluuv37HDp0kCRucPFFF3Hj\njd9l8cgRytGAs3fNMjcniVyJNkMOPfIA/dVTF0tZDKnyAmxgIp2idBVzV1xAq1fVBR111xutSF1g\nQwYuv/LH6Jclg0jghGGt4bGVJi3r862NE9aFYD1q0og6lLGgTD2FDAy0oogcUhtmRcWhh29hFIo6\nWwVHJRwDRigds72xjaoyjLylCAGLxRiLFnXMhNX1BHdFVVPzlaA1NbYpA9ZILakqc9Lp5uQF3YRZ\nT/DH6h5GQIbA4UGGn+pQbqzTbCqUTvBOEKeCiVYDIVMsMOgdoxXWESuHCPSpJgvyxLIyKnmkyvny\nEw9xw76b+NJj3+PAvoOsP7TIA3ffwfjWSVwaUyHoZZYjgwFrbcP9rPIIQ7679yAb/YzIebJRH60E\nx48cZktr4pTr5nSAYDZTlB2NRkQjrg0qjPdYWzDekbzq6st52SsvZvdzzmXb7j1Mb3s2E2du4f0f\n/I80GjmKTWQuOLw1CGdJtKQRKSIBka6FYomSNHVCov+/gY5/JIRwCfBa4D1CiJc+9cnNDJan4838\nwHPv+5VfI007/N3//QUOPHGUlZU19j92gNwG1vp9Dh46zGg05GWvuJKrr309Dz5yL89+7i6+8YUv\nMDfdYWpM4aoew/V1dPBs2zJGdJoXkkTQTBtEcYw1J
WdOTDB78YUg2xgiqqA4kErGRYOGN/zd9dfx\nyX/706Q+w3hOevY3jCbO6y6xQ/DSX/k/uObDf8JgaiutkGKFAhchraRRSZoL89z2yz9L94s3kDhL\npCTD9jjn/uzPo2JJLgTsfQKV1kZ5xpYEZXFaI0JgQK21iGWond+tR1PTPNCyvltaTxxpwBNE/Thp\no3aiaXdysbDJr9L8wV/cQJ/AysIiiR+iQkHhAlIUhJBx9NhBijJn8cijrO5/gFBuIGNBU2iqhRV2\nbT8POzbJfNJhrxuxsl5yyc7nIvKY3kZGZkoKA3npcMJRaEtpRxhTkeWO2+9+iN5gjaga0D22wM65\ns7jo0gtOvW4+0NIRqfJYU2GsA2fpNBPSsYTQCjz+2N08/sSdZHaN9f6A+eNdyqqNjrZwfOMI//rn\nfoGgA0kkibUgitQmC8DhrEWFmvZf5hXr3T7r3Vpv83Tjn1SzhBAWN7+uCCE+R32s+mdF5f2b699C\nXpYsHl8k7TRptcfY++h+7rrjDgb/D3NvHmX5edZ3ft7tt9ylbq3dre6WulsrsmzLWyxhYxxscAwm\nxBiDk3M4h7CETEzCBDJMksMAw4ThzCRxgAMGQzw4hASwWWxjbAOyLW/YkmzZliVr39V7dS13+23v\nNn+8t0rdshHMkHPE+09X36q6dW/V+7zL83yfz3dW8dIbX8Kx9YNs7W5z3wP3MFjSHB4e4dj3vYXz\n555Ca8XuOPVCCCkIwTIcDi79IVGQZQbvO0QueGI2Jm86OtOyVAdmJw7w9//j27jzB36M6fQsG75E\nRo2XGYUPTAgMraBWligVQXr6Nk3kc7kmrA4xp86gVUZUDUZUdAwJ1lLGLjl24al8yfPe+L3om9/I\n+Hd/h3W7y05fojpFPLpOdeYphlFy3Te8kodu+wyVEogQsTjWeppxN2X14FEIJStHDvDwl+4jRyaL\nCSCGZBEYQwRxcUPc01YQRIdRmi1K3n7Lp/i7L7ieA0sK7Vs2t04StcHrjMwEpl1HJLI1Ps+5s6e4\n/CU30PiGEC1XHTnC9ctPce2xF/Ar7/sEy4evJWrB+tIy60sDwgHQaLY3t9g6m7OzPWFQ9hBe0imo\nYuCpzS0OI8miph4vc8XllzKGNYZcpTS4jolEaXqC/kihRjltHrEOxhcmdHNLORwgpOJc8RhXnThB\nf7TFd/7Dt/I7v//fcZOOtvPIqBZHTpV+Rz75kS4vLaGMQucShWZr51JI+f+nYBFC9AAVY5wKIfrA\n64CfJVnifR//P63yJlVN6yxHjh9nuLTEbbfdzmu+9e/xu//lXVx11fM4fvkGRw5dxsaxK3DacO/n\nP8cLbriGJx9/kLPnnmI6aVgerZEVgq6r0DqjbS/dWbKswHvIdMGsmuM6Szi9RacFq1rhz2xTTxv6\no2XkNMMicApCsFgvOdvPCC145cAICmFAwu2//A7ycoW8OksrHYXXWCmYRyhlAisgRZKgoBiECfd+\n/I85gqNwLYUX7Fx1BfGs5eqXXsn5D58jC2nib2uPj6AQZFnGddddzix4Aop63iBiJO8XdNMGnEca\nhfMBpdUlJqh7AEJg/+MQIlV01CHn3Xfdx1BZvv1Vr6CwMG9afLPDNSeOM3M1VRU4eW4T3VgGIpJn\nmsZblvuSW794P194YgtMn8NLBdFJvAgoadMxUDasHVzm6w++FJdHOtcwOT/l3JNnmdcVp2Y7eAkH\ntWL2uGL5mqu52C3EIcjKHoQ67Y7ConKF6QvykUwk21ZyWK3ROIubN0Qtme+eZbL1ZQ6sDtk4MCK4\ngIgLpXUIZKZA6eSPoWJO6yxV3ZApRSYXjm7PMv46O8tB4L2Lc7EG/nuM8c+FEJ/nb2CVd/99D9BJ\nOH/hArPphH/zb/41y+trvOZbX8vx41eydX6LzfEuG+YYs8k5umaXj3z4g0znU2KEldUB8/kuB4aH\ngVRbeKYEZDqZpYyQc0gfaYNl0Ea+6V/+Kz77ex9g46FHuf1HfoSN2uMxBBJDNwZBfPlNvOEHv5ff\n+4mfxbtTeNnDe0mnPUu2oqznNLGhyjQqtBTf8Dqe94Z/wN3v+BXCow+lFyAgykgtJYNHnmD79Nsp\nvGRcekIrecWP/y98+pf/Lash4oRl8vBj7MYWqw1ZBCEFTbeFjIZlPaJpA5PYcv0LrufOz92JQSOi\nIESSA4FWl1hwxJjQRt77BABREeU6vBLUUqNkzh9+8vMcWVtjd7bFtz7/emTXEWyHlZrpZMZ1R1bp\nSU2n+ui2pV9OefGrr+LL901xjUYGh/QCpEGIgJB7vGqLio7SQRCSlctWuOzgCCGgax3nzoz5yrmz\njJqAWF2Hi64tb/rut/DwF25jdvZxlAmILJL1Fbp06EFA6ojWktAJ+sNBauCK4GuLm57iyIFX4F3B\nYClnUjVIFVFaEYIlukToFyiyPGNeTSn6JVoE/ioB/l8ZLDHGx4AXfY3Ht4Fv/ku+5+eBn3+255Wm\n4I5P3c7qxjqvetUruPeee7jry1/iyitPcGjtELatWT+4zh2fupUnHn+crmmJQN/0QEDddqjcMJuf\np+wt0XWeumou+RlNl8R4TRdom5a6nTCbNHz2vX/K1/3Q9/LgT/80y00CIETVEoPGuAqtCy6Md8iU\noc5AdBb1gmuQ65fBR26hzTQEnwyFosfqHlff/ErOD0csn7iKM08+Qa+NTDNA1gxbRZcLZBOI0mF9\nn+GFTSZ/8h5WJ4HzwTIUhs0L52h1JI+pSWmpn7O0dAAVI67zeG25auUQJ+cTjh8/wsknz+C8QWHQ\nsqWzPtEaYyBquZC9pIUEQgJsyAWlXghsCHQCHtnZxpnAXacu0D+aUVVj7txtOL27ywtvsIT6KKYH\ncRrJVnZQasyVRzTLQjLoGZSyBBfpnESrVHXPqfFEKJexTYW1EYencZbRYI3Lj5UcObRM5w2b5x+6\nJFgQnmNXHuOeMydpcfRKhcgt+bCHlwJvLZky+Kwjyg5hIkYojDP4eJbV0fOS/0wE1fMc6OccPbbO\n8lqPldUR11zzdXzwT+7l9k/fxaEjfdx0hDO7yL8iHJ6zCv6dX7yL2bzGlHPe8wfv403f9UbmdcvK\n6hqf/vRnePnLX84HPvBhXDXDZBlFWaRjjVRkWYa3Ch8sSMMTT51GKbOw3nt6PPrEk2itcd6ChywE\nQhbJzj7F53/+51h2qckpSChDxlY+wJsKEwJLTzzB7/2zt7LsBXk25Mbv+sd0oyUe+8qD5KcuYE1L\nI8AjUXbObb/wn9Crl2E2H8OZGcEJMlHg45BZ5sh8wMmARlIRUHbG2S9/FhkDayurbO/s0PQzYrug\nyuuICx1bm+fS/53n8KGDSOVY1oayt8z2+bNEb5hXXUqFGk3rnk6X7rtvXXI0e3rsmxHESAgFX3nq\nHKfOnCNoz1j1qbTn1HpgmD/Ikt1hPpS44S6rKjLIBGujktPjuxGxR8aIkRmCU8hC0+JxQrK5O2Yy\n32Y6d7Rd
i5ABxSn6xrCxvEKhJb380jyTcFtsHFpFj/o0s22cgCYKxu0c00s25WLPTl0K8JEoA0FM\nkSiG2dV41/Jj/+ub+f33/i7KBJzdYnO8zdb4Aibr8atv/01OnbzAT/zYD7LrxngnWF1bfdY5+5wF\nyxMnT3P9DTdQDgZ84yu/Gx8C993/Z7S15fjlR/n99/zBQu8k6DqHx5L8Wix2XjFvG3qDPrPJDlk+\noK5bXF3DxXd8IWnajrZpiUowlJoWQZjscNx5JnsTRUQ2Vwa89hffxp2/+du0H/sYVjasCcnpQuEQ\nfOQn/zUHXvpSzu8+yGGdEYRBEAhCEAwMwxRxviIoi2l7RDLCza/k+u/5Jzz46/+e7N57mGpL2SUw\nXBMrSmVAa0oHVsNMBhAGHRVCBI4fP8oLrrsCJaGfl+zs7OCCp9qcUBZ9br75Rj720dvJ9ADiwrBV\nBmQCSxHCxccyj9JPp5VTEMFoNGJzc5MYNFVUNCIilCGrPLlSnDs959D1c6bmAmZkaGgojEa5hhAd\ng0FgOpsx3arZPPko12wc4Yl4Ad8LmLU+Ki+ZCMWjm+cZz2qMEpRZztGjB5nUu9idGQdHlwpgZbXF\n4ycfZ+OyFR6+/wy11wgrEGQEH/Ex4G1INh5aMZu3KJ3R7weOX3Gc4BW//o4f48GnbgFjMYM+y70B\n4x2HazMefupByB/hxIkbePcffYh/8Pdfi991bJ7ffNY5+5wFy4kTV3P8yqv5xCc+yYMPPsqpk0/w\nQ//k+zl/+iQnT55mNqvo9XqIIIgxYLsksmyrmrppWN1YZ3c8oZ23ODelbR3WtnDt0z/DBehcQOU5\nMUQaEcjmgDPMTSTUHkwC1/XngnFjuOEVr+SOz3yGLgS09VTRkSFZKyThjvtYKQtM6NHqjpiqNEQv\nF/awHmU13jQQW669/uuogma36HEogBWWXjCY0FAJjY4FLgiyaUPZL9jWqZHKEdExgeG0jnTWMx7v\nsr40YquuyPOS0WAEIfLqv3sTt/3FXakPXwUKndEFj+va/R0l3Vsi/X6fpmkWPTFJ3Tse7yadlPXo\nQoMIhC7SmBYdSp58xHPN1Yq8DEi1TSn6OK/x0iN8ZBAlcxeQTc1y61nrItsjSe+qPnGtA7nLss4Z\nXV4SwoBev4fODF3rsVi6qs+ZR3YumRuxahkpSdnTvOglL+TkucewoWHWRro6QAhkLunj5rtzpMlw\n1mKdZ2NjhR/9Z2+hXD9LP78MpRRnNhv8WkQITSCgdM0H/uz/5C3f9k6EX+Edv/5u/ukPfBczf+kx\n/pnjucO3di20nuuveR6rqyvkX38TD375LmaTMdniyLW7u8tgsPQ0qjV4qqqhaTvq9gJVNUcKiVtI\n1xN47+khtSI0yeVXCYmMkkJ1XMgtq0S2csG3/+i/4P2/9HYGfsyXfuSHgQg2IrXHeIMIEi0ColNU\nWUXPGayoaYLk1T/5s7hS8ic/878ztI4sKBoVMWGIjp6HfvtdbI/ez+rWDo0M5E4Srns+R190GY++\n58tc9xs/x31vfSu9kLHU1GwWkagDhhwhHf3+kCha+sOcyc6UaZQcuOIoL3nlyxmNRhw5foQsK/iZ\nf/XvuO+eJ3A+IFwyVXXKECWp+q8kSknm8wS+Tlbckkyr/VpMlkHEYbShix150We1v8qrX3EloT5J\nVE8i85KukjRzC0IQWmiaDtloZOs4cOzr2C236B1zLI0qRoOMkJXM5g2j1bSgeNUh+gUxX8O2AR00\nV15ziFvOfmn/73bv2OGDQF+omO3s4jJPPkgNcHLeoZWkDQ7hEowxdh2dr5E+4xf/w8cpS4j+ALOm\nYXPsmcwV2+d2cF1ASLji6gGf+ez93Pqxb2Jt/TjHLn8Jx68/yBNfuTRonzmes2BZXzvKV+5+iCOX\nH+GBB+4mkxLnkv9GFC1SKrRWyZPDOZxzyShUmOTX0XX7JMPYdckp9xnB0jYdRmvqpkEpCc6xpT3Z\nZUv0H79AT+Scvupajv/Q98Cv/Dd2lMOKSHBQBMtEe3TQBCWIQmCEhugIQmDJGPcN2WCJVhVopjSi\npt8mpXcnLWXruPJJz7g3J6dgmh/kJT/844x1R/zgFxGnngAME2VoRUBXXcooLWolV1x5lNe+/pUc\nOnyU5bVVOuvIyzJ52xCYzyYUWc5/+H9+g7f9X7/AR/74w+BTy3BZZtB2+BAWXLWIEKkrcb/DfGEC\nlWpVix4YF8lkxjfd/DIOH7iCXDa0taA53xHzGd61uMonkEQbcTNLZiUezQMXHufqF2asLWsGKzn9\npZy6c6yu9rE24oUkCs1cJP5xr7dE1U4Q6hnW3ja9R0RA9ErausY3HjcOFIWgMElN0TUWqQ3zqkbo\nEm8lPRPROhBbSxfSMbmxlsmFSOwyfGzpDQPTSUXZGzF5/Axnz93B4SNrPP7AeZ5tPGfBcv/9D3LZ\nZYfY2TlHZiB0dqHTyZBCY3S2sEtzqUtuYRudXKWS7qltnwZKd12HMZeqRqVImE+tNVmeU1VT+mh2\nDg9R44b1KnDHj/8MZj1juVDQOISMOGnZLZc4ePNLefjTf0HmWkTs4ZMiD+0dB/2EL/zs/0ZjFEfb\ndP4PWjMVJV7mjLQjdjN2+xbdWRopMT5y5/s+yPpAo+KMT/78L3BQ1vRe8EIOLK/Q3vbn7IiQahal\n4B/9wPdw4OBBhEj96LmBaVtRFCUxBnqjZaLz+NDyoz/xz3noK/fz5EOPYazCWcvQ5AQl2K6m+xd5\nIcR+4TLVXtICoxDkRYFSire85bsJXYVGElxBUWxghObs6Udpmm1KUWAbu2jtzvGdZB4DXR7IDiiW\nlmB1LUfnisIbrA1QKKyPNDbQOIt3Hc5PIDbY7tJJat2CKyA15MvkCupqewEql0Tn8Ca1mPk24kQf\nomF99TK6+nHKfkApyZLWqV1bCZb7FhkVQpUoA4hlvCvxUTFvKh55as7LXvF3uOXWu/7SOfucBcu1\n117JqVOPM1rpQ7BPZ2sWvRlt2yCESDC2vR5sKWmaNlV2MQuebiLA7/m0XDw6awmLjkLvHa23lI2i\n6RVU1x3l5N2PstSeZunxnImK6IWQzhnJFa96Heuvu4nb7riTvk1ixzIoOqDrL2EcHJhLdk2L1wYV\nFFZobvwXP4o6fphP/7uf47L5HC0iUWY0mWPoW9o7Pso5UTNmjiFjrjyxrzm/eQYZBCJTCClBBkYr\nq7RElFwgeqKgNH26xlKWJV09heCZzicsFX1+9Z2/xpte960415EDXdcSFCyVBfO6wSejToh7bl9P\n16WEELz4RS/mhS98Id6DyDNEbNBGEHyCpx840mfrwlm2zpynrUSyNfCezitarVhd36BXnKEoapRs\n0eRkRUEsA3Vn0TEVTvMQ6byjs+1iJ7xUZhIBIR0uRpxSaLXEQOd07ZzJeJuitKyMepR9jfOK6DRL\nSxssLx3lQn0KZSzGOFbLIcsqo/OS6awhErDR0FmNswYvMoQXuOBoq
8Cj9WPPOmefs2DZvrAN0XP6\n1FMsDYYoo8n7vQQqiGHfsTaEBB/Y68XXRtE0DXHhhZhniasr5dcAH/hACKmBqLYdpcqookU1lt1M\nMbj6ELuPn8W1DaWUqLiYTA4ev/UzmLWcoYdoM7QCLzzLr7iZq974bdz1rvewdd/9qOARUUK0NMKw\ncug4lR0yD1DrDmKJloIyZNAGfDEh+EBfZVgRqK2hvuM2diUoWl50083c/vnbUViyIqNjRowCGUBL\nQ5QlRSZp6xYhDHmZoBOzyQ6f/Mifc+TEcR75ykPEKHHCE12kUBnlcJkL05qIx7u4uLcsgkUKvu8H\nfhApDMjUteqDR2OgnWO0xPYkKMnho1exceAY7YJBVo2n+CIn6/VQypOXY2JWY2OLCQFhI6LI0DIl\nXIRwaKlpm9TcF5zEi0tNhKIVeNkg9BATQyp4lob1VclllxUMlxuij9BJ8t6IY9e9mNNnx2yebMgL\nie8aulowljlKecp+yZoqiQKC6TGpArsXGrouMBiuceHCeRSC1j47N+w5C5YYLHmWsbJyFOE9jsju\n1vY+2WN9fR2lFEVeUFUVSqlFwETyvNx/bG9Yay+Bz0Ein0gpqKpER9dK0MsLog8opdkdZhQ3HmP2\nyGnM2QlGa1RrKbxiw+1y8t1/hHYtjdTUymFazyOf/yI3vOabyVdXsF3AFxErA7lXDKPg/T/9kwQd\n2QhzoshwImc7BEJ/wNA3iGgxQlL4SCcEXWhQUaFthzKw8+g5FAqnUi1B+wwjVSIn1g33fekTfP7z\nX2Rrc4vz5zeJPlIYQzWf0xmDjYGjV13LIw8+ntLJUmKdxYbA0qDHvJrhYwdi4bYWIwrJNSeuY3tr\nStskcHYVxujMYH0PoUD6Fk3iJ2elRGXJYGi4tAK6Rys80dfEUBK9ITiPwyOMQziJlmmqiRCQPqCC\nTBwBXyfLw4uGEB6tcnyEzDikiBw81HHoqEWomtbVmN6A2bxjtrnDePoZBksZl195mEwuc/5chWtK\nzm97DmzkCNHSy/soZZDaIEtDk4Gzkel4jJSCxkWa6d/SYFEsiIStI5PJTHN1eSWxsELAWsuFCxeo\n5u0lwWMXGNNuccFvmgbnHHmeY+2lF0XnPM61GGOS5EMIrLVkUiXH29mcqYB8vc/w0Do74xmzR08z\nMBmnRcvBKjIrIiYKZg4OSs3VXvDFn31b0hIVHicEvaDx0qJixyE/RQVJVJ5GRZT0vOLHfw653OfO\nt/8S4swDSJeTR4gaGgQqeMBSuwx15glGhWDaCv7zv38HUXXs7uzSzwpCCGxubTGdtDSVJfrIaHWF\naeOo60C1M6ZtO86fPcWBtTVOnUkwcWNMUho0c4ZlwbCXM57VuBD3W3g//7nbeNU3vBoRJds7E+Tc\nIxAgc1ywoDQIhyQdd4UCrSRap0VMq0h0OSFs4O05vJY4HEIojNqT3yh0CGjvEDbQtTUhdOnYefGQ\nAaVTEsPQcdnhHa48IelcS+dzCGs89tgYv93RNYLIBW540RHGuyepK8+RYy/gngfuwxjNzqQhWkMc\ntvQGQKcItcdIRXQtWJA6HeeV7D3rnH3OgkXLuHC4VbQxmRdF69IvXwi0UoyWluj1UgA0dZV4YURG\noxFFkSX6PNAFi20c2lxq7e1lRGcZ3nqG5YCt8RaZNnjhCdEnYIH3iCJjF8Eju1sUaznloQ2iEJwc\nV3SbuxyYe5Ca1hh2bJXaY+PCFru3jDh0BdOtCwzmEwQBJx0yRkSQONnRbNesZIHJdJuR9wTl0T5B\n9jopcBFsNFgBuevo2x5tB6efPI8ZKoKD2XiclMhO0e+NqKotatcwPfU4vX7Bt7z21bzyVa9m+9wO\nH/nIrdx7z/3MahISl6R7MkrjO0uWGQ6sjJhVDfO6BqH5wIf/lE/9xWf4R29+M8eOHWH9wAm2N7cY\nj6dkUtH4GVEoVF7s684ApNLIOEOLgBWOyCHa7jSRc+ihJHSGIAJEhQst3kViULRNR6gjKHAxJBDh\nYkRliUi0gkEZuOYqT17WxLZJkEMEV11zjHtufYTJ2BG1YGfcMGw0KhOYYclV17+Me+/6HL6T+MbS\ndIYlB0YEbJWzu9Mwq1P7NTLSK3u88OY38NFkXgm/AAAgAElEQVQP3PKXz9n/0UHw1x1CQK9XJrfZ\nmLqfpZR450EK2iZ5kiiVLvzLKyOkTIacbduyuzshxkhZDsjLbHEMu/SC733Yl69P5/PU1+I9eZZB\nEEgkzjmqqmFaNfT7PdZXRtjoEFowOL7B+uEDzO55lG635sSRI9QnzyFcIGYa7S2tc7z0rd9PowQf\n+6n/g9VxTWciMshEyg9wz3/5NbzqWOkalM+ZqkAXPbtCclY65sHTy3KyCNFIOhFokZza3qWcWbTU\nlGXJkUOX0QXFdDJlaeQp7YjgA+Pdc/zZhz7K7//hB2nqgPOgZI7QCpMXCS3kFyA+wHYdzloG/T6Z\nyZg3FmcddV3zrv/6m4wGA77l9W/gxhfeyPpan7qqubAdaa3FRUX0DhbuXsl7MgPvkUYiYg/RHiMI\nS1tNMRo63y48RMEGg4+RLC+IUbA7nSeE0kXBopSk7PUZDg0njkV6g3NIodjor1NZgWwDs3aba1+2\nQjPXCJ1TDgST0w02dpx57CyDpcMUvWV829J0FqkkIXbkAqpZw3g3Musi6IxipFheuZFKPKPF4xnj\nOQuWLEu/nbIs96XldV3jvKexHb1ej/l8jlYLR2LnaV06kvWLkkHZo+06kNDUDW3bMpte2sudS0Ug\n0HUdUis6Z9FCMJnNEkxagJSKyawCKVjfWKVQJnmGAE0XUVlGcf1VNHc9ygNPPcUhmWGUIkSHkILM\nCj74f/8qL3rFyzBNg5UO5ZPeLEgIPjIMW9Qu0klJqyVzItu55JyMnPSWLhMY32C8IDgPSmP6hsHa\nOkc2hhQm58H77uOWez7OvNOLNLlGSJM0YFKgY6SJCiWSetr5lKvQWY7Uhmb+9CVaCAExUM+m5GXJ\naFBQ1xZJIArN7rzlj/74fXzozz/M61/7zbz4RS/mxInjbO3usr0zI1oWZB6/aDg1KC2QMiP6Di2P\n4zpBxyZN3EQUJYgG4XO8ULTW01nFfN7hXImPlx7DlpeXkTJxzXrlDkVR08tXUSan6KAw0M8d3bCl\nnmsm0watlxgdvgzfrbJ1bpezJ8+yvHKAJx59mEGWAxrnBU1wdJUjxB7IiCwGkK+wduQmmtB/1jn7\nnAWLlDJ1z1UtKssWl3GZLJ2Voq5r8jynl5lFKrlFi5SRwSdqu4gxNVplhjIz2Gf0cteTGUWvl2os\nbUtW5MxnM7RQdHVFnhdMtndYWhqRFzrZl8aICAYpPaGxzH1kMBwQVpdotsaMW4uWGUYIHFDYyNr5\nc5z88B+zZDukl3Q9ResdndJoCy0CKyIzGZmJQJCas6LjCdcSVI72giCgUwInLFIahIU77/gStwmP\nVgrfWIIL6Mygsz5t8MgY
FhBKgYsRjSSGZPgToqcNC56yUCyNVuiqOd3+vS7B5WzTIrPIoFBYF6lt\noHaOUpS0uxXv/+AtvP/9H+IN3/Z6XvKyl3P5sRVmdc3Wzg51XSfWmlCI4EBZBAahBDpcT9OsMJ8V\nZK5GZpsImYqEXRCpLTpqPNDM1+EiDWOv7CNVMl3q9Qy9XkahwRSefplT1hnLITJ3JTbXFAjOb3bY\n3ozaO/zQQOYIUYEssF5Bk4rZifxZYEyPPPN0eo0TV72Wshjgnnl3esZ4zoKlahsEirwcYNsaLZNB\nZowRLWHYL7Fdh/eWEAJFkVHXNVImV13nHc62GJN2qRgFUlxalCyHJfNqRt3M6WzH0miIWUhth0tL\nzHbHaCE5uL7BdLqNUoK6nmN0QRcizgdsW1EFWLpyg9C11GHO+dDRGy5x9Mgxzt79FVZiRFiQIuJ0\nYBYzXv+2X2bWOf7kp38CbRusi1QiMtWaR9qGcR6xWiKix8k94ERyFQ7B4YHdabKv1CqQK50kKSLt\nPkomp2cUONcldbVMzsJhgTXVimT4JCVKCJbzFaaTKU3XJPffVJ/Edx2dSiwtrSKFU9RdQxCS2XyO\nlIr3ffBDvOe97+M1r3kN3/ya13D15Yep5nO2treZtQ6vFUIaoooEmSDho/6AQJ/J9FG0j2SmpovJ\nhrtuBVXXZzobUeTLwNPo3f5ggJSetnIYE8j1FrmUaCJC14h+sgnMrWReB2ZlZLAC484jyZHaE6qa\n3PToFddQzWYEBXlskjVeKGkYMhiscuTQlehsQAjpPvZs4zkLFgiphuI8hIgqDDKCW7S/xhiJStM0\n1X5K2FpLjG6fWqmVRghP1yXruGcGS9e0aK1YXVlNF++QvrauG7YvbDEoSg4eOMDuzg5CekxeYLLU\nLOVCxMeIyTKEEGxOJ/SvPsCgDUwfPcXubJeTXxmzKgIiM6goUU7QikhnCp588DHuefhBmtZhVGBW\n5jwVLGeqmqY0NARilAmWt6ishwUEHJLL12Q2ZXV5gJYKSSqsLjKwRB8RCxewxP5Nv0+lEis4iSXF\nvoe8dYG8ZyiGPUIFXd0gL8Ioee+p65SON8ZgtGE8b2jqBpUZvJdkWcYnPvlxbv3oR3jxjS/ida97\nHZcfOYIXipNnTlO1LRIJIkOqiA8tg9FljEartN0O4/FpNs+fZnuyS5AFK2tX0R/2ybL8EkrDyspy\nUpfnmugmyb1aO4QIBKuJ0SJNhs4VzkvKAqrWYoKnCjVYRcDSeE82uIJ8eYAIks53hBAxIadUGxg3\np4w53YUZxeUbxPi3tJ9F6wxnU5pYGU1b1QsZC7iFuG+PtBhCoG3b/Sp/23YURSJxRGA+q8iykqa5\ntBLsncPajmI0RCi58GVMK/fq2iqDvMRaS/CBECPNvEJGjQsRgqdtW8oyfU2Wl8yEIxaCK296MQ9+\n6gvkQlNFjw2O3EpykppXTy/wuf/8NjolGCjHAz5y3s3ZUpGqr5P7VEivPoiYGAIX0eHhaVp807QE\npwhSorQgOIfRGinVQkOWjq8pQNIiopRavPZiXxrkpV9o5BR5v0cvL9jdTsJBIQRGiKSfI4K1BB1Z\nGpRopZk2FY1vSQEZKDLDvffdzZfv/gIHDx7in/7wW7lsfY2sKDi/dYGd2RgpM7oYkUoSXU5e9FnL\nDzFceX5atJzDxgEej/cWLrpu9ns9DJFGS+paIiiI0RPDAvvrOoJzxKgJUaJVhpIeGR0qKqyVaCSt\n93Q2A1YwWYbMMvLcsGZyPvPxD7C7+TC5chw4fDmvPvqjqL+tnpIhJkFf8CGBzxZSchcdxHSnaZrm\not6LNJGEUCDjIvsF1nb0ByOMLvatKvbGoNen6xRlUWB9h5KaST1DKU1mMpquRS2OdN4F9u6ZYqFk\n1lrvF0l7ZomVkFOHji8129xbKHptQHUtAxlYNZ6+NAgEIY7wtJxp5mxnEGKBNRKPRcWACBLlwCuw\nkr+Ug5jI+REJ5IXEWY/JdYKvioX2bdHkJaUiXiSMfLqIK7HWpqAxGh9ShlApycaBA4wnY5qmIVdp\ndyImywpsJOpIr8wwxZDZfEbbNYsFTCGEIssMF7Yu8Iv/6T+yun6AN77pjRw8dIi1tTVmdcP2dErd\ndkjZ4oVE5QP6eera7IIn6B7eB7q2uSRYyjyjq6bkqoedjwh+B8ec6CJSdgThkVoQfMA7BU6Ab1Nb\nuMpQQiGCgBqEjXg6ApaslzMq+tx/+ydx2ycZqhal+6wfeD4ipKP8s43nLFhsSKu50ILOdjgncEIh\nUMRosU2L1hql9kDhyVDUuUgUHutbQCFNn/FsxnAJdHYpBV1KjTKauW2JOGYLJtnRw0cgpABtvUvC\nQrFnfWARSuGcT1k40ufaWQVlQetDEh02LvWNiEiDYleJZKYjwNKg0MiiSAtBDIgAIhqChCg8waRd\nUcZkifdVmIIQFxbXEaUDNnqyLKIWNgxx0RiXK4nJc7y1dNEioifLsiT/iQLrfLpLiNRvExEgFFYI\nbIxkSyNillPPZmghKKRCBU+Iks45gnPozLDW71NpRd11SCVweGIQyBiZ+Yrq/BP82jt/haXhEt/1\nprdwxRXHufLgIZq6ZavapfMSJXOq2BAUibbiBRIw6tL6WK41WV/hnSPqQ2xtbTMatBjjUTokTZoL\nEAzBBoJr0Ti0zBFiTiZLnMsJdgmvesmWQisyDI/f/SXOnrsTKSNKpNT/9S/9eoIu0PLZuWHPWbA0\nzaLRJnqMyTG5QUhF09SEBa+3bi29PEeJlC6VRGxIBUyhzKIprCPLMpq6/SoI+nQ+BiWIDqq2Yjqe\nc/z4cbqmJdOablH9B9DG7B/99lpy94a1liwrsN5RFAW7kynBSIILxChoY8BKCcqkFT8mly4f9gSQ\ni6yUTDtCCAvPSPiq3RAWiuDF58MCWC2lQQiPi4lYIoVM3vTe4ZoGqZKFXCQVWiHZSuxJgkJI3xvj\ngrYfIgsBMmiNV4kr4BPNAReSE1vwLlFxIhRZRlZkhBCTO/SiMBlaT1SpwDcbT3jnO3+DwWCJ73zj\nm7nmmms5vHGA1gUu7IxZLpaYNg1d5/Ehoo3+KgNWIQ3eFZjcIQuB8M9jXt1Llp3CZGoBGpS0tqVx\ngtYrgkwsNYkmBkHVgotJilPokv5wnbNP3MtD99/CilDYYLE6Z/3ICWQ+IAaF+CvI4M9hUVKRZZKv\nf+U3cOOLX0ie9fj1X3sHyIxZ1aZJvABjC5Hk2J11SJURggMlyPMCt1DgNk2DdZcaeXo8Uihmdc1s\n3rC+sUFd13RNi9c6+TAujHFiCBRFkVQCMWKMTpPCpRaBrusgKuZ1xdbuGLdQMzvCopk9ketjTIjR\n/Z0iLly4QyTKNPn1RUck4GveVfY+ThKd1MvTKzKazqNJzl6CiBICHxMD2QePXiRHQggonT3dhw+4\nxeuMLmIXRUi78KE3RpFJiZES7xKLLQaPiiAjBOHT40phpMLkKd1vbYuUOhWA8YQIA
c/u7ha/9V/f\nhVKa73nTm7jhxhdxaHUFH9OkO1dtI4XGB5lUzReNgEHnQ4R0KO0oGFFPK6oGyr4nLzxdmOMD2A68\nVXTW45yicSWTbo2g19D6ICbrU+YDdk89xEN33cpSptAuEkWkMz1e9A2vofYBJec8cveDzzpnnzMz\nIyGgszU33fRinv+Cq7n+hhN855u/De8agguJuOJJK3frUh++D/jo8USs9UwmU2xrEULSWXtRDSEN\nk+fUbcdkOmNtdX1h5BpRWqcs157l3MKn0VpL27b71nOQJqzzDuc9znlAUM2r/aIcJC8UAgTnFyA3\nkR5drPSI9MfZS/nsBcTXMr+9OFD2jmfOp+a3uukYLg2wziKUJAiBExGZ6X2H47TIpCMrJM1TCDHZ\nxvlA21jmVQ0iMhj06Pf7FEVJMRggM4MTApkbvILA4mIUA0qKxYce27V41yFEJM9NcuNSCte2eGuJ\n1hGjp7MNTVvxvvf/IT/1k/+WT3/y4/imYnVYcuWxKxgNc5RIBhgXD60zpDBIUYIoiTqjHF3F2sbX\nM5tucOpsoGoz6lbROkPjMjpfUMclxtUSjT+CNCcYDPuMRn06u8M9n/8QQ9kho8aKiFeS4eoRVg+c\nQKoeF86e5CtfvPVZ5+xzpzqWnkwbbvnwh7n2mrfig+fmm2/i/i/ezV13P0bdRpA+EfSFoG1bsizD\ntvP9k0uWZRiZs7m9g19AUS4eFzbnVLZjZbhGoQ1N1eC9J89zqrZD63SRV0qhFl2Zo9EIay1uL+Xq\nXarj+EDXWVAKFy5tBwjRJ4RqjCipiC4F4V7YOJGcrKSQ+8ewvUzfVwWMfNpgVQAhWhCgTYZ1FrzH\nSIHwDqkURpsE5Igx9fXkhizP8d7jnaRuWqbTKcPhgJXVAXKYUdcdzjfEaFNdxAX0bIosc7JMo4Qk\nK3OE97TzBhc9uQct91gDQIx47wgiCSqlzlB7wRoCkZQoiBEmVSpW3nLLn/He976PV33jq3jdt3w7\n63mflb5ia/vSdl6pCjwtUigEji645H6cjVg9OMSGHebVFuPxU+zOO3YbyW7V4eURynyDPFtCqREu\n5jRVwxc+9iesaIm1NVZ1KLXEN77+O+kfOk7tMqrZDnd++qP06r+lnZJ7KM3HHz3JbHfO8mUbZEpz\n000v54tffhjvPUoJlpaWkuxF61TlN4v0JlBXNZ1w6UhgO/JnXPBF5lnp9ymy7CJQg1w8twIWrbTB\nAwpjSmazOUVe4n26z+yZwOYmS5N7kZ0LMX6VISiL/++bC3Hp3edrIYn2+QIXP3bxexAJ2NE0DRvr\nq/vHQmChnVP7X5dlGc45Tp8+TZ7nFEWP4VKfojSXtGbHKNHa4D30e31ErFknUnnPZFaTFTn9rESb\nHCEMbTXHhZRRw3kMe5edSBAQnE3gOxUx2oAoabuW2nZEUherlJHpdEKeF9zx2U9x+2dv47prvo43\nfscbOLS0cunkCBqpDHVXkwuN1hkhBhoX6VyPsr/KwFyJC5czWCtZ7gTTeUcXM7wXBHIQOacvTPjS\nHR+mjDXW7SLzHrF3hH/4/f8TtetRNYHoGz73qfezmltOnd7m2cZzd2eRyaNkMm753f/2e/zjt/4Q\nwo5pmym9Xo71DbNqQtuk7E6ep0xF6+r9S7lRJvkFNi1Zr8DZS+kc83bKaHlAPZ6jVE5mDCH6xR9P\n4v1e/4Kg6yxSRLQ0yQFqce4PIewTUrSUtCGQfFBUEidyabCEkM7ue81oUkoWlUZgEVyLaBCRr8En\nvnjEfbZXU1dIuc54Z8zy8vKCk5bS63s1KCEEKMloZSV9HAJtW+3vguk96pQiDQKBoq5btMqY1RMO\nX3uCA4Xh9JlTeB8xUlKUPfr9Hp2bMz63RY5CKIGQkUxKfFwkIboWqdKRKkpJkacaT9W1ZCojehBK\n4l1NppO/5wP3f5m3PfQAhw9vwM1Pv+vDGyNOX9ii1yvTfUxlxODpgiPmksqnv4daPgLSUFoohoKq\nbvFREaLGBs3tf/QHiOokZZ5hsw1e/sq/x4nnvQoveqA68szy6T/7U9z5x9iptijM/4DUsRBiGXgn\ncANpWf9+4CH+Bs5fIsYkljOCk0+d5C9u/TiP3H8/99z/MBd2qnS3EJIYHLPpjCzL9zliZV6AFNTz\niuCTh2FVT76KKHhg+QCz8Zxe2SOGiPPtYmVd9J0LuaijpEmJFMQI1js66xGo/dXb+6Rrss7jg9x/\njq+5swhHJBVYESFlv/jqXeSi280lr3tvdwJBXPysZFuhKLMBs8mMsixxIZDnOWXZS8EpFlm9xc6n\npdrnFBiTpcBXLBCr6fglZFIOyINLPLVznkika1uMMEijCAGU0HiRM1hdY7q5RV+bVJKBRe+JxHiJ\nVEmsSghJ/SAkpTJ4EWl92tWyIicEl2iRSuKc5ezpS48/pfI87/jlzOuO87tTHFAFgQyKTGUEWiyC\n4DOE1EgkymQoafEIfND4AP/zW/85d972Sba2xtz0mm9BiBIR+8ggkbrki3d+gp0z9zIMMFeJePNs\n46+7s/wS8KEY45uFEBroAz/J38D5q2tbnE3Qg82tyG/95rsJIRIQSKMRIq3s/V4Pa+2isUsiF/cX\nKSVlWVLXNU3bIoyhe0ZbaPARowzW2sUxLuxXtJMMJE0W7zxC6cVOsrjDyFSQzLL0b8qOpZS2DyHV\nX77GUQsWgOnAvl/9xfzhi0e6/LP/NelSzz4IL316wSYTge2dMcZZDhxYB5Jjsc7SjpsKuiFB6Lzf\nf23GZPtylj3py17DV5HlKCJRJlSUkTlnz5whzwus90hrUSZNaES6eHspOD+bMRwOKQRIt1g0RGrg\nkyoVRwWR6BeejdKQ9TIa6+iaOv3+CQSfkiu2u1QA+zM/81N8z1vewvOf90KuPLjBrGmYVA3TpiOi\n8CER8BUiscAW71XJPiDJMkPjIsEG/s6rvh0fWhw51idoho2Bh+66g7u/9EkGqmXeOJTJvqrT9pnj\nr0PRHwGvijF+H2mCOGAshPgO4NWLL/st4OOkgNl3/gIeF0LsOX/ddvHz7h1RrHU43aO1C5aCDEhn\niSEFi9YCqQTFwmimtc3+ZN/rlqTuqDvPqacu9lCCqq6wXbfgYmVkZUGms5TmFBEl09FFqEiI6cLc\ntHUCbovUi9513X56WYinJ6LYP8aJ/SPQ3gRPqVizHwD7E/8ZQbV35wkXBd0+bjU9kJqTSDtNUWSs\n9Vbo9UpcqBGiwPmwf7n31qb3tHh+v8gO7mf8YlI+GGPQcrFjLrpO9WLnXt/YoK5qWt/QNhEtJJnW\n6BhTrSmAk5rNyZw8BtZ7A2Qh9lPhMQQMi1qOXOzcdEiVUxhNYRRNZyFEJm0FQaLVpRoG6zp+53d+\nG4HgO7/jTbz0ZS/nivUlmqg4P50xmXZkOgVJqtorpNAEUrofJ1DSIHolzlpQObGxZFmPzrY89dBd\n3PaRd1PoDh+7ROmn+JsHC3AC2BRCvAu4
EbgT+Jc8u/PXxYHxNZ2/nHP7cpbOukTBlwKtDFqnImTS\ngSUSpbUOSEAJISVGKYSISSncNJw8s4N5hnNTv1fgFtXsuq6Zzecwn6eMWJbRywusdQvwRUbbthiz\nEHHGgBBP45asDbAfNIve9cVu9Myj2J5t+f7Os9g+9o5deyapf9nYP64B/2975x5r+VXV8c/ae/9+\nv/O4M/fOvTNTSh9UqBVINBQCMTwsihE0hliNRIkGCWriH2A0QSyGSOIfNSAhRmJI8BFoCMYgiTQq\nARKQWqQ+6NhCX7ahtNPOszP3eR6/3957+cfev3POvTOdXsvMvZCclZzc87rnt8757fXba33Xd60l\n5C4sJi3AGAOj0QBrlYBiXIHAZLcNISAkMqWHScyV9LUTow4ihDqAQJmJqa371+mWDDfGVJ00DTn6\nhqJjWV5e4vHvPoFoiVVPQ+DJrQ36I1joVQTfJN6aKjaXBxQuDRAa1VuocWn0n3UEa+j1+4wG9QWl\nFVtbG3S6Jc4Z/vnLd/K5z3+ON77hFt74pp/mcG+Bq5eOcvqZNdY8xCZ5B+Bw0kGc4CMYKRDfEGMq\n5yhLZRA2WT11ggf/9R855EaEYIjiMCkRxrMTj5LsJs/igFcCf6mqrwS2SDvIRJ7P5C/n3CSoTVdr\nSbyvqMQ6YfWlsWm0QPQ4ZwCPCxHjPbEeoqo89fRpTpx4ml6lXH/N9mnFxjgMidJeWEe/6lGagsqW\nEODp06dY3VhnazRkMBpleo3Ni73JsG0ACSgBHxrasQqzxqAwiROipt0nhAQkJEk0HoMFTXHANhfO\npJvK9h9K8k0lEuoxPQTrkgn5oDnBGFJziDo17Jg1iJZU2RItJSmKIbtdeTS4NoEYhNGwYbA1Bi0o\nKBmub+X5izVvu/UW3vPudyXOWRjjNeCjoKFg6IUz5wcM6kgjltpCkHQRanzAiFAVJSZKdtc8EpVC\nlcNLfZYObL/IFZVjY3MTP27wW1tUhXDXXV/lAx94P3d88hOsnn2KpR7cdM1Rrj96hH51gKLsg0ng\nhTMOQsSopXKKs6DxIPX5Ne6/+/P4zWdS8tZ4KmfplhXGCE095lKym53lOHBcVf8zP/4scBtw8nuZ\n/HXswYexxhJiZGVxkaMrywmlUib8p6ZpMGXqmj8YDBARbOGwRtgaRs6ffIazZ7c4fHSFQ8v9C0xy\nNB5hNBmjcw7NAT0kY10+dGiyqHzwrK0Nc6lyl6rqADJZ+M45fNMmIdmWJ7kQxZrKztdlkqncARe3\n78nZdiEH7ChIxFlHv98Dibm82k8Yx60blup9zMRtbGMT1cxEyIdrDT0CGhJ5snUl26YetizY2go8\nc+4cP/7qH+OpU+e447Y/QkPi6SUIPRk46hAnjBqP39qiQ0FvwWBdDzUVsUkdVLpFSd00DEcjjEvF\ne8NBPamabaVXFljtYDQyHg4RH7CuoigKvvWt+7j/vge4+uqr+Z3fejfdgwe5/sgB1gee85uWorAM\n64gaAxjEdqmDZ7B+km/c9UWOP/YgfZvYDpV1bA03Wd9cb7NHl5TdzGc5KSJP5iD9EdJMlm/n2zt4\nnpO/XnzdC1FJiFRVdCZw7jgzgXu93uTESU521XXDgX4HXxvWN4acX93iyNEVFpd6qI7RsH2jdKaT\najY0EENEXFrkbT6iwOBVUx2MEei6CbVlMDiPSHpvQsMC/X6fenVtEofMZvl3/Gg7HqaGEe0GbOw0\nD7PT4NJfzW6BmeZt8m7UNPVk9wAmAXtRFEiYGnAbN7WPnXNp126TrTmemcRUMnXXgkaCgf7SIlur\na/zb14/xzWMFMfdvgzYOSzw6LyBqE93Fp0rNl15/hGuOLPPV/3qA6KpEDA4N1kT6nZKxT+fCiaSp\n0TPSNZZut4ePAR8idYgMBwNM4bDWIKKcOXOK22//IAcXF3nHu36bQytHWFo+woZveCaM2ayFoB1Q\nQzMc8PB9d3H84XvoGI+LgooiqhzsVCx0jqCZonTy9CmeTXaLhr0b+LSIlMBjJOjY8j1M/qJIvmzU\ngA1hsoC7nVQhORhs5YYWNXUO0quqoomejc2aM2fXOHz0KpaXK8YjjzWdCziJdd2w0OtnNKwg0OSq\nSs1BruBCchOiKj6GSYZ/YeEAIcQU6+Sa/fXNAZ1uLy/06eLe+fW2P5dau7bPtUnGdpFua3ZHuxO1\nn99uQUKv14MIVVVty/+04r3fNuatfa11xZqmwTDVoa5r2ra4MUaitrU+Cfb1ViEopujgfc36sMHE\ngNEA2gIa099BFZSEcplOwdFF4U03H+Edb/89Pvkv9/O1f78HHw3jwQABOpXDB9Lgpbg9Zgl1ir8K\n57BFSaFgisDGxjpiIoXrAwVR4PQz5/jIn32ExUMH+I23/QrXvOQmDh5d5PTqiPVBzermkO8+/AD/\nffeXWTBDjAQK6WKco65HgNCMhwRN8dalZLcDWP8HePVFXnrek7/Wz53HGEtvoU+QgK9rep0uEj1l\nPy2I0WhAx1pGwwFVp0dRdTn+9AnOr57nqquO0F8oiN5jUARNfvyMCJaNzQGqSt0MwdRAIk+KCCpu\ngg6lmo8CnGXc1DR1SnAGiRw4tIQfNmBr6nqMUKKkdj0T5m2uIVFVUIvJC7FlHM9qpTGhU0Z0YniT\nHSTHLZp6rBILxapnZWmBsa8pKBAxGJ9OHc0AAA0jSURBVGmPpcSQjAKjGDs1AN8kXZo60jQ+I3Tt\n7mMwGd2THEfMQs7aSNZVqX1kHBv6zuYxgpbWTjRTnVUDzgBqcDbSLftoEPxojZfddA2/9psf5t6H\nTvJXH/s49fqQuvFoCHQtDHfylKwjqBJDRH0ynIOFo7d0kFGIaLTUdUMTElLZ6Qjrq+f4i7/+ON1O\nn1tv/WVuuOHFFIOac6dW+cYX/p5+2MQIWGPxcQR12weioSo7NMFsa9p4MdnHLvqHqeua4XBIKAp8\n3SAK/W6XKIJYQ9HppuE3BxY5t7rB+TPPsLa+wcrKClWnmwmUHmMso7Gf+OqttAswQc0Gn3MoIpm7\npUJVdVP1X9WjHo/TfEHnGNdp+GfZ6dL4mqIoKQtH3UA8u7ntGNMZKOkKafNC3kl72bmDzD4/gZ7N\n9LEqaPAUZUFpHczkSmahapMh2qIsaGqPl5inoIVtQMQUcEiI3tg3E5QsRvA+Tt4vGRkKITXMGHlh\nMGroli6hT5oMRaOCpstVwGAJVCZiCbhOn+Nn1hjUnjBc59WvejlvvPOzPHL/ffzJbR9k68wqSEG1\no6F7+5u236/tNtrUNWVVgRoK24HcLNFKmhs6HNd43/CpT/0tqkpZdjk3bKhMQGkSFSZKzp+V2/Je\nzwUbwz6yjkPj6VYdjhw+Qq/Xm/jWw/GYwbjm3PoGw7qh8ZHV9S0GozGnTp/m2muuZ/nQVQgFwUOt\nkSgObMGOjWVbMlDawrKQBqwGrzR1YDSsU+O3xmMVQu1RH6hsQceVGE1dU9qTl3YMsy3/AdsNs12A\
ns7HIbIyyDUmbeWxMgsbb/xNJlJilhQMYZFIz3+ZNWqQLMlTsQ0baUjfO1tVqx3a0XLjgA3XwNDES\nFOqQmMmqZIOXiavXvr+0FcZYxiEwjoFowKsSDIkh3iJwUYhiKRaOsMYCT24IVdkjjEC9J6rwohte\nzB9/6EO89FWvZFOheRb3x1mHEUOn05nu3N5jiVQumXOnLHBi6FYlC/0uYmBcDwnB04xHHHTKgjMT\nuL/Nzc26yu1F7lJADexrw4p0xdgabGELx+JimmSlKjx1/GkOHFhMvYQr4dz5VZrGc+ONL6VpasZN\nqtcfjkZgzQSy1bCdoj8xFkmN5WxRYjMkJKoUVUWMgaJMV20Rhw9KURaTIHo4HNLvlwwGIzAOLwYr\nubyZNu5oyZ0zmfodO4aSi9aQyQJvk6vbELMZ42qHDlVVhWqkU5YMmzoHuSbFDEZoxskojLNE72nn\nsLTwdbtjNcES1KMhtyyNea6NKsELgmYwZfo7Ji6bEmolkGK7TlUyGAzpdDo0scE5M0HXBMWr5dHV\nmkNjxwtf9BIeefgxPvznn+CR754glAuM17eIGeSoVTFhO/MCYi6TTjVNMWYELqTGJGnhB0RDLrGW\nxOzwqSHh4kIPn11T9Q2bjUJmFhRFRQgpId32PUh95S4f3eWyS1VViQCIMBwM0wyVsqQoCq67/lrq\nsWc0ajh5ZpUQPFcdOcpwXCMmZbcH4xGuaEmBiuT6yllpYl6EChgLJs2M75V2EmuE6PGjcSJZhtQY\nwwXHgf4CvvGJP1Y39KuSaAxNo/Qqx8gr07EN0yAX0sQxl5ObPoQp2XIm+daiXhFNbWBz32FhaizW\nWlxRYsqCqDGzeEMa9KOgakAsMQc6qYlDpK7T4htlt6qp06SuoFPGgYlKyAwJABslcd8yZcjm4Uam\nLQZzBquptr1pIsaVjGpPEwKOBGcbIt4HOt0l7nngcb709f/AWkNRHsIVJrW+sgaKTmINqiaGxo4L\nuheTjSNmur+B3M7J5AtUwuJykjqfg661qXuLNTQaoVCkKLCW1NrKx0lcYvOguACTXthcemPZ/7Li\nEFKte6fTIYTAuPG4okOQyPrWEFcWLB9cScVOIblBIppo2zFizJT3tFNaWNWY6TAk55KhpLkuqdFD\n6loCMRrKopez5KNJLXsIAc30XxFhcXmZ0xsnErAgszGGbrvNQssiktBgSVC1sp3RnP5fJrmT9vO6\nRYVEZeRrCrHE/P6oIZXBZhRPQyTkC0bLQGh8NlJjMyKXdkBVUozSTHM+knfDJmR3UhWRIjXty8m+\nKcEzGfvNN9/M3XffTae7gMaQaPqqPHF2lbJ0uPIQxloiSpMX6mhcp9imSAlo33hM3L5Ky/5BjEaG\n4w26IdUIRY15F0kwv4bUFWc2sW1camYSfIPBUlhHbBLR0+KQskJRvG8yH9Dg6zA5B3rJvPq+lhWn\nL1iWJWKnJ8BgaULk3Oo6BxYPgYAtXHJyRAhNxBoLGnNgmq6Es8F0K1NOV/r8cV3jjKCkfEGqrXco\naT68swXt6rHWMBgMpt1SRIjGMK4DZVFMGl4AF7hS7YKa9YUT+hZSXKBTt2M2sJTsqrX+eQyRxYWD\naIzpN2pPrqYeYr72tIGaqlLHVF/S7krNLBKnIBnyRRPjwLkqw7/K0sISq+trWFvhNWAiWGGSeGy/\nw2xN0LFjxzITA4Imyr8rpnX/RIPgqHWAKwrqpsYEBSxKjTUGY9wFFa7DJmBiJNa5L3PdUJRlUggS\npD3zu00vRnVqoiEWTDpKu1tYUpVtIq2GSbReOEfjfc6DfZ/W4LeJssToLTGSBuhEhadPnWI0qjm0\nuEKjIeVAWu4S+eTENOs9asx5iQg7fM6vveXrz63Id0jst/+vvPl5/M9zyUV0OcmTV+BAF8qju9Bl\nr6QuOzRxTG9oEOtY2xrxAlegsUFLzeBGQerPGClcCRppmmwbJjElLAIGmpAK0yQGQvAUYmlSFxBE\nEzMaK/hwaT9s39CwWYkhuUEiwokTp1k+tMKP3HQToBSmIITkdllbIMbkOIDUt9GWGFsixmKeI6l0\nUXn8Mn+Z70Ue328FZuTx/Tv0W9/5dm7/2EexZYGK4fxgiBUwCGWALoLzDQ5Nc0hD6vustgJb4Sgp\ncZRS4iQ12NCM9rXupBiIoaFAU8w1HlE+h177N9o7J8CUhJGjgTNnzlJV3VQjPxgwocL7SOowJZOm\ndxO/PkIkJdkunVKayw+K/PDrXkHn4CJNU2NCmKBWpbGYCCYH8FYUsUJVlQxHQ4JRTDBYk4wkmBqN\nEGJys0OIU3RSI+ARk1sgaXL9LiX7Ch2HELDOAoaTJ89w4403srq2NUmexRhxUqGaWMmQdp8WvXDO\n0TQhZWVDIITIa7/wuonPHIJOWMKgJETUbJsEdvypJ3hR+KFJ18bxeJxbIg1QnaInpXOTgD09Z7n/\ngQeRmaxvGzOVRedCSJgZHpi5cPipiLC1uU7/sRWCbzjY77F4cIHQpN+irRfxwWSXNGb0bZrXCZrQ\nqTauKGQKFEzZ3Rl8yNTBNmi3YRakiKyun2XliZXJZxeFncRgKVabAhujYZMheDOZpyOSmBLGGIpI\natfadfziO36dhx55gFt/9Ze47tqr2RoOOby4TKfb5czWGuveE2zg7JlT9IoSIVLk5ubRpIIFDIh1\nFBryGnEs9TusOGGkghfHwEdCtHiJeKOJMhOnuqmGCedPNaJWqcP2svSdIhejbV1pEZG9P+hc5rJL\n0Wfp47ovxjKXufwgyvdFgD+XufwgyNxY5jKXXcqeG4uIvEVEHhKR/5XUFeZKH+9vROSUiNw/89yy\niHxJRB4RkS9KavXUvnZb1u0hEfmZy6jHdSLyFRH5toh8S0Tes4+6dETkHhE5JiIPiMjt+6XLzOdb\nEblXRO7cb12eVXZSNK7kjVQw9ihwA1AAx4CXXeFjvgG4Gbh/5rkPAX+Q778P+NN8/+VZpyLr+Chg\nLpMeLwBeke8vAA8DL9sPXfLn9/JfR2ow8vr90iUf4/eBTwOf369z9Fy3vd5ZXgM8qqqPa2qV9Hek\n1klXTFT1LuD8jqffSmrfRP77C/n+pI2Tqj5OOhGvuUx6nFTVY/n+JvAgqex6z3XJOgzy3ZJ0ETu/\nX7qIyLXAz5EaObZI1L7ocinZa2O5BrbxNy7aJmkP5FJtnGabj10R/UTkBtJud89+6SIiRkSO5WN+\nRVW/vV+6AB8F3gvbiOP7eo4uJnttLN93OLWmvf1Sel1WnUVkAfgH4HdVdWP2tb3URVWjqr6C1H3n\nJ0TkJ/dDFxH5eeC0qt7Ls5Dk9/ocPZvstbHsbJN0HduvEnslp0TkBQDyPNo4PV8RkYJkKHeoatsN\nZ190aUVV14B/Al61T7q8FniriHwH+AzwUyJyxz7pcmnZi8BoJohzpO4wN5B85Sse4Ofj3sCFAf77\n8v0/5MLgsSRxbh8jJ24vgw4CfAr46I7n90OXw8BSvt8F
vga8aT902aHXLcCd+/W7PKd+e3GQHT/I\nz5KQoEeB2/bgeJ8BngZqUrz0TmAZ+DLwCPDFduHk978/6/YQ8ObLqMfrST75MeDefHvLPunyo8A3\nsy73Ae/Nz++5Ljv0uoUpGravulzsNqe7zGUuu5R5Bn8uc9mlzI1lLnPZpcyNZS5z2aXMjWUuc9ml\nzI1lLnPZpcyNZS5z2aXMjWUuc9mlzI1lLnPZpfwfVKttii8aM/YAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# or show the bounding box of the referred object\n",
+ "refer.showRef(ref, seg_box='box')\n",
+ "plt.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "sent_id[64727]: woman in front\n",
+ "sent_id[64728]: lady smiling\n",
+ "sent_id[64729]: woman\n"
+ ]
+ }
+ ],
+ "source": [
+ "# let's look at the details of each ref\n",
+ "for sent in ref['sentences']:\n",
+ " print 'sent_id[%s]: %s' % (sent['sent_id'], sent['sent'])"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 2",
+ "language": "python",
+ "name": "python2"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 2
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython2",
+ "version": "2.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
diff --git a/refer/refer.py b/refer/refer.py
new file mode 100644
index 0000000000000000000000000000000000000000..40878ad963e41216674ee79af2070d7d4724c1b6
--- /dev/null
+++ b/refer/refer.py
@@ -0,0 +1,317 @@
+"""
+This interface provides access to four datasets:
+1) refclef
+2) refcoco
+3) refcoco+
+4) refcocog
+split by unc and google
+
+The following API functions are defined:
+REFER - REFER api class
+getRefIds - get ref ids that satisfy given filter conditions.
+getAnnIds - get ann ids that satisfy given filter conditions.
+getImgIds - get image ids that satisfy given filter conditions.
+getCatIds - get category ids that satisfy given filter conditions.
+loadRefs - load refs with the specified ref ids.
+loadAnns - load anns with the specified ann ids.
+loadImgs - load images with the specified image ids.
+loadCats - load category names with the specified category ids.
+getRefBox - get ref's bounding box [x, y, w, h] given the ref_id
+showRef - show image, segmentation or box of the referred object with the ref
+getMask - get mask and area of the referred object given ref
+showMask - show mask of the referred object given ref
+"""
+
+import sys
+import os.path as osp
+import json
+import pickle
+import time
+import itertools
+import skimage.io as io
+import matplotlib.pyplot as plt
+from matplotlib.collections import PatchCollection
+from matplotlib.patches import Polygon, Rectangle
+from pprint import pprint
+import numpy as np
+from pycocotools import mask
+
+
+class REFER:
+
+ def __init__(self, data_root, dataset='refcoco', splitBy='unc'):
+ # provide data_root folder which contains refclef, refcoco, refcoco+ and refcocog
+ # also provide dataset name and splitBy information
+ # e.g., dataset = 'refcoco', splitBy = 'unc'
+ print('loading dataset %s into memory...' % dataset)
+ if dataset == 'refcocog':
+ print('Split by {}!'.format(splitBy))
+ self.ROOT_DIR = osp.abspath(osp.dirname(__file__))
+ self.DATA_DIR = osp.join(data_root, dataset)
+ if dataset in ['refcoco', 'refcoco+', 'refcocog']:
+ self.IMAGE_DIR = osp.join(data_root, 'images/mscoco/images/train2014')
+ elif dataset == 'refclef':
+ self.IMAGE_DIR = osp.join(data_root, 'images/saiapr_tc-12')
+ else:
+ print('No refer dataset is called [%s]' % dataset)
+ sys.exit()
+
+        # load refs from data/dataset/refs(splitBy).p
+ tic = time.time()
+ ref_file = osp.join(self.DATA_DIR, 'refs(' + splitBy + ').p')
+ self.data = {}
+ self.data['dataset'] = dataset
+        with open(ref_file, 'rb') as f:
+            self.data['refs'] = pickle.load(f)
+
+ # load annotations from data/dataset/instances.json
+ instances_file = osp.join(self.DATA_DIR, 'instances.json')
+ instances = json.load(open(instances_file, 'r'))
+ self.data['images'] = instances['images']
+ self.data['annotations'] = instances['annotations']
+ self.data['categories'] = instances['categories']
+
+ # create index
+ self.createIndex()
+ print('DONE (t=%.2fs)' % (time.time() - tic))
+
+ def createIndex(self):
+ # create sets of mapping
+ # 1) Refs: {ref_id: ref}
+ # 2) Anns: {ann_id: ann}
+ # 3) Imgs: {image_id: image}
+ # 4) Cats: {category_id: category_name}
+ # 5) Sents: {sent_id: sent}
+ # 6) imgToRefs: {image_id: refs}
+ # 7) imgToAnns: {image_id: anns}
+ # 8) refToAnn: {ref_id: ann}
+ # 9) annToRef: {ann_id: ref}
+ # 10) catToRefs: {category_id: refs}
+ # 11) sentToRef: {sent_id: ref}
+ # 12) sentToTokens: {sent_id: tokens}
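+        # e.g. (hypothetical ids): Refs[42] is the full ref dict, imgToRefs[9]
+        # lists every ref whose expressions point into image 9, and
+        # sentToTokens[100] might be ['woman', 'in', 'front'].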
+ print('creating index...')
+ # fetch info from instances
+ Anns, Imgs, Cats, imgToAnns = {}, {}, {}, {}
+ for ann in self.data['annotations']:
+ Anns[ann['id']] = ann
+ imgToAnns[ann['image_id']] = imgToAnns.get(ann['image_id'], []) + [ann]
+ for img in self.data['images']:
+ Imgs[img['id']] = img
+ for cat in self.data['categories']:
+ Cats[cat['id']] = cat['name']
+
+ # fetch info from refs
+ Refs, imgToRefs, refToAnn, annToRef, catToRefs = {}, {}, {}, {}, {}
+ Sents, sentToRef, sentToTokens = {}, {}, {}
+ for ref in self.data['refs']:
+ # ids
+ ref_id = ref['ref_id']
+ ann_id = ref['ann_id']
+ category_id = ref['category_id']
+ image_id = ref['image_id']
+
+ # add mapping related to ref
+ Refs[ref_id] = ref
+ imgToRefs[image_id] = imgToRefs.get(image_id, []) + [ref]
+ catToRefs[category_id] = catToRefs.get(category_id, []) + [ref]
+ refToAnn[ref_id] = Anns[ann_id]
+ annToRef[ann_id] = ref
+
+ # add mapping of sent
+ for sent in ref['sentences']:
+ Sents[sent['sent_id']] = sent
+ sentToRef[sent['sent_id']] = ref
+ sentToTokens[sent['sent_id']] = sent['tokens']
+
+ # create class members
+ self.Refs = Refs
+ self.Anns = Anns
+ self.Imgs = Imgs
+ self.Cats = Cats
+ self.Sents = Sents
+ self.imgToRefs = imgToRefs
+ self.imgToAnns = imgToAnns
+ self.refToAnn = refToAnn
+ self.annToRef = annToRef
+ self.catToRefs = catToRefs
+ self.sentToRef = sentToRef
+ self.sentToTokens = sentToTokens
+ print('index created.')
+
+ def getRefIds(self, image_ids=[], cat_ids=[], ref_ids=[], split=''):
+ image_ids = image_ids if type(image_ids) == list else [image_ids]
+ cat_ids = cat_ids if type(cat_ids) == list else [cat_ids]
+ ref_ids = ref_ids if type(ref_ids) == list else [ref_ids]
+
+ if len(image_ids) == len(cat_ids) == len(ref_ids) == len(split) == 0:
+ refs = self.data['refs']
+ else:
+ if not len(image_ids) == 0:
+                # imgToRefs maps each image to a list of refs, so flatten before filtering further
+                refs = list(itertools.chain.from_iterable(self.imgToRefs[image_id] for image_id in image_ids if image_id in self.imgToRefs))
+ else:
+ refs = self.data['refs']
+ if not len(cat_ids) == 0:
+ refs = [ref for ref in refs if ref['category_id'] in cat_ids]
+ if not len(ref_ids) == 0:
+ refs = [ref for ref in refs if ref['ref_id'] in ref_ids]
+ if not len(split) == 0:
+ if split in ['testA', 'testB', 'testC']:
+ refs = [ref for ref in refs if split[-1] in ref['split']] # we also consider testAB, testBC, ...
+ elif split in ['testAB', 'testBC', 'testAC']:
+ refs = [ref for ref in refs if ref['split'] == split] # rarely used I guess...
+ elif split == 'test':
+ refs = [ref for ref in refs if 'test' in ref['split']]
+ elif split == 'train' or split == 'val':
+ refs = [ref for ref in refs if ref['split'] == split]
+ else:
+ print('No such split [%s]' % split)
+ sys.exit()
+ ref_ids = [ref['ref_id'] for ref in refs]
+ return ref_ids
+
+ def getAnnIds(self, image_ids=[], cat_ids=[], ref_ids=[]):
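+ # return the annotation ids that match the given image ids, category ids and ref ids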
+ image_ids = image_ids if type(image_ids) == list else [image_ids]
+ cat_ids = cat_ids if type(cat_ids) == list else [cat_ids]
+ ref_ids = ref_ids if type(ref_ids) == list else [ref_ids]
+
+ if len(image_ids) == len(cat_ids) == len(ref_ids) == 0:
+ ann_ids = [ann['id'] for ann in self.data['annotations']]
+ else:
+ if not len(image_ids) == 0:
+ lists = [self.imgToAnns[image_id] for image_id in image_ids if
+ image_id in self.imgToAnns] # list of [anns]
+ anns = list(itertools.chain.from_iterable(lists))
+ else:
+ anns = self.data['annotations']
+ if not len(cat_ids) == 0:
+ anns = [ann for ann in anns if ann['category_id'] in cat_ids]
+ ann_ids = [ann['id'] for ann in anns]
+ if not len(ref_ids) == 0:
+ ids = set(ann_ids).intersection(set([self.Refs[ref_id]['ann_id'] for ref_id in ref_ids]))
+ ann_ids = list(ids) # keep only the annotations behind the requested refs
+ return ann_ids
+
+ def getImgIds(self, ref_ids=[]):
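+ # return the (unique) image ids behind the given refs, or all image ids if no refs are given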
+ ref_ids = ref_ids if type(ref_ids) == list else [ref_ids]
+
+ if not len(ref_ids) == 0:
+ image_ids = list(set([self.Refs[ref_id]['image_id'] for ref_id in ref_ids]))
+ else:
+ image_ids = list(self.Imgs.keys())
+ return image_ids
+
+ def getCatIds(self):
+ return list(self.Cats.keys())
+
+ def loadRefs(self, ref_ids=[]):
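+ # the load* helpers below accept either a single id or a list of ids and always return a list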
+ if type(ref_ids) == list:
+ return [self.Refs[ref_id] for ref_id in ref_ids]
+ elif type(ref_ids) == int:
+ return [self.Refs[ref_ids]]
+
+ def loadAnns(self, ann_ids=[]):
+ if type(ann_ids) == list:
+ return [self.Anns[ann_id] for ann_id in ann_ids]
+ elif type(ann_ids) == int or type(ann_ids) == str: # Python 3 has no 'unicode' type
+ return [self.Anns[ann_ids]]
+
+ def loadImgs(self, image_ids=[]):
+ if type(image_ids) == list:
+ return [self.Imgs[image_id] for image_id in image_ids]
+ elif type(image_ids) == int:
+ return [self.Imgs[image_ids]]
+
+ def loadCats(self, cat_ids=[]):
+ if type(cat_ids) == list:
+ return [self.Cats[cat_id] for cat_id in cat_ids]
+ elif type(cat_ids) == int:
+ return [self.Cats[cat_ids]]
+
+ def getRefBox(self, ref_id):
+ ann = self.refToAnn[ref_id]
+ return ann['bbox'] # [x, y, w, h]
+
+ def showRef(self, ref, seg_box='seg'):
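+ # show the image, print the referring expressions, then draw the referred object as a segmentation ('seg') or a bounding box ('box')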
+ ax = plt.gca()
+ # show image
+ image = self.Imgs[ref['image_id']]
+ I = io.imread(osp.join(self.IMAGE_DIR, image['file_name']))
+ ax.imshow(I)
+ # show refer expression
+ for sid, sent in enumerate(ref['sentences']):
+ print('%s. %s' % (sid + 1, sent['sent']))
+ # show segmentations
+ if seg_box == 'seg':
+ ann_id = ref['ann_id']
+ ann = self.Anns[ann_id]
+ polygons = []
+ color = []
+ c = 'none'
+ if type(ann['segmentation'][0]) == list:
+ # polygon used for refcoco*
+ for seg in ann['segmentation']:
+ poly = np.array(seg).reshape((len(seg) // 2, 2)) # integer division for Python 3
+ polygons.append(Polygon(poly, True, alpha=0.4))
+ color.append(c)
+ p = PatchCollection(polygons, facecolors=color, edgecolors=(1, 1, 0, 1), linewidths=3, alpha=1)
+ ax.add_collection(p) # thick yellow polygon
+ p = PatchCollection(polygons, facecolors=color, edgecolors=(1, 0, 0, 1), linewidths=1, alpha=1)
+ ax.add_collection(p) # thin red polygon
+ else:
+ # mask used for refclef
+ rle = ann['segmentation']
+ m = mask.decode(rle)
+ img = np.ones((m.shape[0], m.shape[1], 3))
+ color_mask = np.array([2.0, 166.0, 101.0]) / 255
+ for i in range(3):
+ img[:, :, i] = color_mask[i]
+ ax.imshow(np.dstack((img, m * 0.5)))
+ # show bounding-box
+ elif seg_box == 'box':
+ bbox = self.getRefBox(ref['ref_id'])
+ box_plot = Rectangle((bbox[0], bbox[1]), bbox[2], bbox[3], fill=False, edgecolor='green', linewidth=3)
+ ax.add_patch(box_plot)
+
+ def getMask(self, ref):
+ # return the binary mask and its area for the referred object
+ ann = self.refToAnn[ref['ref_id']]
+ image = self.Imgs[ref['image_id']]
+
+ if type(ann['segmentation'][0]) == list: # polygon
+ rle = mask.frPyObjects(ann['segmentation'], image['height'], image['width'])
+ else:
+ rle = ann['segmentation']
+
+ m = mask.decode(rle)
+ m = np.sum(m, axis=2) # sometimes there are multiple binary maps (one per polygon), so merge them
+ m = m.astype(np.uint8) # convert to np.uint8
+ # compute area
+ area = sum(mask.area(rle)) # should be close to ann['area']
+ return {'mask': m, 'area': area}
+
+
+ def showMask(self, ref):
+ M = self.getMask(ref)
+ msk = M['mask']
+ ax = plt.gca()
+ ax.imshow(msk)
+
+
+if __name__ == '__main__':
+ refer = REFER(dataset='refcocog', splitBy='google')
+ ref_ids = refer.getRefIds()
+
+ ref_ids = refer.getRefIds(split='train')
+ print('There are %s training referred objects.' % len(ref_ids))
+
+ for ref_id in ref_ids:
+ ref = refer.loadRefs(ref_id)[0]
+ if len(ref['sentences']) < 2:
+ continue
+ print('The label is %s.' % refer.Cats[ref['category_id']])
+ plt.figure()
+ refer.showRef(ref, seg_box='box')
+ plt.show()
+
diff --git a/refer/setup.py b/refer/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..63b2d4d2da0bc7c05e67d0c5ed884aba97104445
--- /dev/null
+++ b/refer/setup.py
@@ -0,0 +1,21 @@
+from distutils.core import setup
+from Cython.Build import cythonize
+from distutils.extension import Extension
+import numpy as np
+
+ext_modules = [
+ Extension(
+ 'external._mask',
+ sources=['external/maskApi.c', 'external/_mask.pyx'],
+ include_dirs = [np.get_include(), 'external'],
+ extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'],
+ )
+ ]
+
+setup(
+ name='external',
+ packages=['external'],
+ package_dir = {'external': 'external'},
+ version='2.0',
+ ext_modules=cythonize(ext_modules)
+ )
diff --git a/refer/test/sample_expressions_testA.json b/refer/test/sample_expressions_testA.json
new file mode 100644
index 0000000000000000000000000000000000000000..275e3a37ff1a247bf302bcc583190f49bc6cd2ef
--- /dev/null
+++ b/refer/test/sample_expressions_testA.json
@@ -0,0 +1 @@
+{"predictions":[{"sent":"man in black","ref_id":47},{"sent":"person on right","ref_id":109},{"sent":"woman in red","ref_id":110},{"sent":"car behind bike","ref_id":111},{"sent":"car on left","ref_id":112},{"sent":"man in blue","ref_id":382},{"sent":"man in white","ref_id":383},{"sent":"left person","ref_id":519},{"sent":"man on right","ref_id":520},{"sent":"person in background","ref_id":525},{"sent":"person on left","ref_id":526},{"sent":"man in white","ref_id":527},{"sent":"guy in white","ref_id":528},{"sent":"guy in red","ref_id":537},{"sent":"white shirt","ref_id":538},{"sent":"player in white","ref_id":539},{"sent":"red shirt","ref_id":557},{"sent":"girl","ref_id":558},{"sent":"baby","ref_id":588},{"sent":"baby","ref_id":589},{"sent":"woman in front","ref_id":640},{"sent":"girl","ref_id":641},{"sent":"right guy","ref_id":732},{"sent":"man in white","ref_id":733},{"sent":"middle guy","ref_id":734},{"sent":"woman","ref_id":756},{"sent":"man on right","ref_id":757},{"sent":"woman","ref_id":814},{"sent":"man in white","ref_id":815},{"sent":"man in white shirt","ref_id":828},{"sent":"woman on right","ref_id":829},{"sent":"man in red","ref_id":931},{"sent":"woman in pink","ref_id":932},{"sent":"girl in pink","ref_id":933},{"sent":"middle guy","ref_id":945},{"sent":"second from right","ref_id":946},{"sent":"left guy","ref_id":947},{"sent":"white jacket","ref_id":954},{"sent":"right guy","ref_id":955},{"sent":"blue jacket","ref_id":956},{"sent":"man in white shirt","ref_id":1023},{"sent":"man","ref_id":1024},{"sent":"man in back","ref_id":1052},{"sent":"left guy","ref_id":1053},{"sent":"woman on right","ref_id":1152},{"sent":"woman on right","ref_id":1153},{"sent":"left guy","ref_id":1154},{"sent":"woman on right","ref_id":1333},{"sent":"man in black shirt","ref_id":1334},{"sent":"man","ref_id":1362},{"sent":"man","ref_id":1363},{"sent":"right guy","ref_id":1371},{"sent":"left guy","ref_id":1372},{"sent":"man in front","ref_id":1406},{"sent":"man on left","ref_id":1407},{"sent":"person on right","ref_id":1568},{"sent":"person in front","ref_id":1569},{"sent":"man in black","ref_id":1582},{"sent":"man in front","ref_id":1583},{"sent":"right skier","ref_id":1623},{"sent":"person in front","ref_id":1624},{"sent":"second from left","ref_id":1679},{"sent":"man on left","ref_id":1680},{"sent":"second from right","ref_id":1681},{"sent":"left guy","ref_id":1682},{"sent":"woman on right","ref_id":1683},{"sent":"girl on right","ref_id":1684},{"sent":"man on right","ref_id":1811},{"sent":"man in front of man in white shirt","ref_id":1812},{"sent":"woman in white shirt","ref_id":1861},{"sent":"man in black","ref_id":1862},{"sent":"groom","ref_id":1882},{"sent":"bride","ref_id":1883},{"sent":"middle guy","ref_id":1977},{"sent":"left guy","ref_id":1978},{"sent":"right guy","ref_id":1979},{"sent":"second from left","ref_id":1980},{"sent":"person on left","ref_id":1990},{"sent":"left person","ref_id":1991},{"sent":"player","ref_id":2001},{"sent":"top left corner","ref_id":2002},{"sent":"girl in white on left","ref_id":2129},{"sent":"white shirt","ref_id":2130},{"sent":"woman in white","ref_id":2131},{"sent":"red jacket","ref_id":2173},{"sent":"red","ref_id":2174},{"sent":"catcher","ref_id":2256},{"sent":"umpire","ref_id":2257},{"sent":"baby","ref_id":2264},{"sent":"man","ref_id":2265},{"sent":"boy in blue","ref_id":2291},{"sent":"boy in red","ref_id":2292},{"sent":"man in black","ref_id":2375},{"sent":"man in black","ref_id":2376},{"sent":"blue jacket","ref_id":2721},{"sent":"bottom 
left","ref_id":2722},{"sent":"man","ref_id":2767},{"sent":"man","ref_id":2768},{"sent":"batter","ref_id":2805},{"sent":"right guy","ref_id":2806},{"sent":"batter","ref_id":2807},{"sent":"woman in black","ref_id":2981},{"sent":"woman in white","ref_id":2982},{"sent":"left girl","ref_id":3247},{"sent":"man in white","ref_id":3248},{"sent":"man on left","ref_id":3257},{"sent":"woman in middle","ref_id":3258},{"sent":"woman on right","ref_id":3259},{"sent":"man in middle","ref_id":3260},{"sent":"guy on right","ref_id":3366},{"sent":"left person","ref_id":3367},{"sent":"girl in pink","ref_id":3768},{"sent":"girl in pink","ref_id":3769},{"sent":"right guy","ref_id":3772},{"sent":"man","ref_id":3773},{"sent":"man in blue shirt","ref_id":3805},{"sent":"person in blue shirt","ref_id":3806},{"sent":"man in black","ref_id":3807},{"sent":"guy in red","ref_id":4002},{"sent":"second horse from left","ref_id":4003},{"sent":"guy in blue shirt","ref_id":4014},{"sent":"man in blue shirt","ref_id":4015},{"sent":"left person","ref_id":4016},{"sent":"man in blue","ref_id":4017},{"sent":"girl on right","ref_id":4089},{"sent":"girl","ref_id":4090},{"sent":"woman","ref_id":4101},{"sent":"girl","ref_id":4102},{"sent":"woman in black","ref_id":4143},{"sent":"person sitting on left","ref_id":4144},{"sent":"man in black","ref_id":4145},{"sent":"white shirt","ref_id":4159},{"sent":"man on right","ref_id":4160},{"sent":"right girl","ref_id":4174},{"sent":"left girl","ref_id":4175},{"sent":"person on right","ref_id":4176},{"sent":"girl on left","ref_id":4177},{"sent":"woman in red","ref_id":4178},{"sent":"bride","ref_id":4187},{"sent":"right kid","ref_id":4188},{"sent":"person on left","ref_id":4190},{"sent":"woman in black","ref_id":4191},{"sent":"woman in white","ref_id":4192},{"sent":"left person","ref_id":4308},{"sent":"person on left","ref_id":4309},{"sent":"man on left","ref_id":4333},{"sent":"man in blue shirt","ref_id":4334},{"sent":"batter","ref_id":4345},{"sent":"catcher","ref_id":4346},{"sent":"umpire","ref_id":4347},{"sent":"man on right","ref_id":4396},{"sent":"right person","ref_id":4397},{"sent":"woman in blue","ref_id":4461},{"sent":"man in black","ref_id":4462},{"sent":"man on left","ref_id":4463},{"sent":"woman on right","ref_id":4485},{"sent":"woman","ref_id":4486},{"sent":"right horse","ref_id":4487},{"sent":"right horse","ref_id":4488},{"sent":"left horse","ref_id":4489},{"sent":"woman","ref_id":4578},{"sent":"woman","ref_id":4579},{"sent":"left bottom corner","ref_id":4580},{"sent":"woman in blue","ref_id":4581},{"sent":"head bottom right","ref_id":4582},{"sent":"woman","ref_id":4616},{"sent":"man","ref_id":4617},{"sent":"woman in black","ref_id":4711},{"sent":"left horse","ref_id":4712},{"sent":"umpire","ref_id":4765},{"sent":"catcher","ref_id":4766},{"sent":"man on right","ref_id":4868},{"sent":"man on right","ref_id":4869},{"sent":"man","ref_id":4870},{"sent":"white shirt","ref_id":5012},{"sent":"right guy","ref_id":5118},{"sent":"man in white","ref_id":5119},{"sent":"woman","ref_id":5149},{"sent":"man","ref_id":5150},{"sent":"right dog","ref_id":5170},{"sent":"left dog","ref_id":5171},{"sent":"woman","ref_id":5244},{"sent":"woman","ref_id":5245},{"sent":"woman in black","ref_id":5289},{"sent":"woman in black","ref_id":5290},{"sent":"man on right","ref_id":5291},{"sent":"person on left","ref_id":5292},{"sent":"woman in black","ref_id":5293},{"sent":"person on right","ref_id":5309},{"sent":"woman in red","ref_id":5310},{"sent":"girl","ref_id":5311},{"sent":"person on 
right","ref_id":5389},{"sent":"man","ref_id":5390},{"sent":"man","ref_id":5550},{"sent":"woman on right","ref_id":5551},{"sent":"man on left","ref_id":5552},{"sent":"man in white","ref_id":5615},{"sent":"woman in red","ref_id":5648},{"sent":"woman in blue","ref_id":5649},{"sent":"woman in black","ref_id":5650},{"sent":"catcher","ref_id":5767},{"sent":"batter","ref_id":5768},{"sent":"umpire","ref_id":5769},{"sent":"couch on left","ref_id":5776},{"sent":"couch on right","ref_id":5777},{"sent":"guy in red","ref_id":5782},{"sent":"red shirt","ref_id":5783},{"sent":"batter","ref_id":5784},{"sent":"girl on left","ref_id":5811},{"sent":"woman","ref_id":5812},{"sent":"bowl of white stuff on left","ref_id":5923},{"sent":"bowl of rice in the back","ref_id":5924},{"sent":"top right glass","ref_id":5925},{"sent":"top left corner","ref_id":5926},{"sent":"skier in middle","ref_id":6042},{"sent":"left skier","ref_id":6043},{"sent":"guy in white shirt","ref_id":6073},{"sent":"man","ref_id":6074},{"sent":"man in black","ref_id":6081},{"sent":"guy in white","ref_id":6082},{"sent":"umpire","ref_id":6118},{"sent":"batter","ref_id":6119},{"sent":"right guy","ref_id":6210},{"sent":"guy on right","ref_id":6211},{"sent":"kid","ref_id":6237},{"sent":"kid","ref_id":6238},{"sent":"baby","ref_id":6239},{"sent":"woman","ref_id":6257},{"sent":"person on right","ref_id":6461},{"sent":"woman","ref_id":6462},{"sent":"hand on left","ref_id":6607},{"sent":"left kid","ref_id":6698},{"sent":"player in red","ref_id":6699},{"sent":"player","ref_id":6799},{"sent":"guy in white","ref_id":6800},{"sent":"right person","ref_id":6809},{"sent":"guy in red","ref_id":6810},{"sent":"catcher","ref_id":6811},{"sent":"catcher","ref_id":6812},{"sent":"guy in white shirt","ref_id":6956},{"sent":"girl in blue","ref_id":6957},{"sent":"red shirt","ref_id":6958},{"sent":"bottom left corner","ref_id":7045},{"sent":"girl in black","ref_id":7046},{"sent":"girl","ref_id":7047},{"sent":"batter","ref_id":7170},{"sent":"batter","ref_id":7171},{"sent":"catcher","ref_id":7172},{"sent":"hand","ref_id":7211},{"sent":"the girl","ref_id":7212},{"sent":"umpire","ref_id":7249},{"sent":"catcher","ref_id":7250},{"sent":"catcher","ref_id":7251},{"sent":"man in white shirt","ref_id":7377},{"sent":"man on right","ref_id":7378},{"sent":"woman in white","ref_id":7379},{"sent":"girl","ref_id":7477},{"sent":"girl","ref_id":7478},{"sent":"man in red shirt","ref_id":7496},{"sent":"guy in red shirt","ref_id":7497},{"sent":"red shirt","ref_id":7498},{"sent":"red shirt","ref_id":7499},{"sent":"woman in blue","ref_id":7568},{"sent":"woman in white","ref_id":7569},{"sent":"man on left","ref_id":7570},{"sent":"woman on left","ref_id":7571},{"sent":"woman on right","ref_id":7572},{"sent":"man in blue shirt","ref_id":7573},{"sent":"man in blue","ref_id":7574},{"sent":"person in front of man in white shirt","ref_id":7575},{"sent":"chair on left","ref_id":7576},{"sent":"girl in white","ref_id":7608},{"sent":"left guy","ref_id":7609},{"sent":"man on right","ref_id":7610},{"sent":"boy in middle","ref_id":7611},{"sent":"man on right","ref_id":7612},{"sent":"man in front","ref_id":7613},{"sent":"right guy","ref_id":7710},{"sent":"right guy","ref_id":7711},{"sent":"man on left","ref_id":7712},{"sent":"man in blue shirt","ref_id":7742},{"sent":"man in middle","ref_id":7743},{"sent":"left guy","ref_id":7851},{"sent":"right guy","ref_id":7852},{"sent":"right woman","ref_id":7853},{"sent":"right guy","ref_id":7854},{"sent":"left most person","ref_id":7855},{"sent":"man in white shirt on 
left","ref_id":7856},{"sent":"man in front","ref_id":7857},{"sent":"girl on right","ref_id":7858},{"sent":"person on left","ref_id":7890},{"sent":"person on right","ref_id":7891},{"sent":"person in background","ref_id":7929},{"sent":"player","ref_id":7930},{"sent":"kid in red","ref_id":7967},{"sent":"girl in green","ref_id":7968},{"sent":"man in white shirt","ref_id":7969},{"sent":"man in white","ref_id":7970},{"sent":"girl","ref_id":7981},{"sent":"girl in pink","ref_id":7982},{"sent":"player in white","ref_id":8005},{"sent":"player in white","ref_id":8006},{"sent":"man in front","ref_id":8086},{"sent":"right person","ref_id":8087},{"sent":"left person","ref_id":8088},{"sent":"guy on bike","ref_id":8089},{"sent":"girl in white","ref_id":8116},{"sent":"player on left","ref_id":8117},{"sent":"player in red","ref_id":8118},{"sent":"right kid","ref_id":8119},{"sent":"right guy","ref_id":8214},{"sent":"second from left","ref_id":8215},{"sent":"second from right","ref_id":8216},{"sent":"left guy","ref_id":8217},{"sent":"man in red","ref_id":8247},{"sent":"woman in black","ref_id":8248},{"sent":"man on left","ref_id":8308},{"sent":"right guy","ref_id":8309},{"sent":"right guy","ref_id":8310},{"sent":"person on right","ref_id":8354},{"sent":"girl in white","ref_id":8355},{"sent":"horse on right","ref_id":8356},{"sent":"horse","ref_id":8357},{"sent":"right UNK","ref_id":8375},{"sent":"woman","ref_id":8376},{"sent":"right UNK","ref_id":8377},{"sent":"person on right","ref_id":8378},{"sent":"man on left","ref_id":8379},{"sent":"person on right","ref_id":8413},{"sent":"man on right","ref_id":8414},{"sent":"woman","ref_id":8415},{"sent":"bride","ref_id":8416},{"sent":"man in blue","ref_id":8490},{"sent":"girl in pink","ref_id":8491},{"sent":"girl in red","ref_id":8492},{"sent":"kid in red","ref_id":8500},{"sent":"number 5","ref_id":8501},{"sent":"guy in white","ref_id":8502},{"sent":"woman in white","ref_id":8685},{"sent":"kid in white","ref_id":8686},{"sent":"man","ref_id":8687},{"sent":"woman sitting down","ref_id":8694},{"sent":"man on right","ref_id":8695},{"sent":"man on right","ref_id":8696},{"sent":"woman on left","ref_id":8697},{"sent":"man in black","ref_id":8698},{"sent":"guy on right","ref_id":8699},{"sent":"left person","ref_id":8700},{"sent":"woman","ref_id":8701},{"sent":"guy on left","ref_id":8758},{"sent":"girl on right","ref_id":8759},{"sent":"woman on right","ref_id":8760},{"sent":"woman on left","ref_id":8770},{"sent":"woman","ref_id":8771},{"sent":"man on left","ref_id":8772},{"sent":"man in white","ref_id":8773},{"sent":"man on left","ref_id":8774},{"sent":"girl in pink","ref_id":8896},{"sent":"girl in white","ref_id":8897},{"sent":"man in black shirt","ref_id":8898},{"sent":"man in black shirt","ref_id":8899},{"sent":"person in white shirt","ref_id":9238},{"sent":"man in black shirt","ref_id":9239},{"sent":"woman on right","ref_id":9255},{"sent":"man in red","ref_id":9256},{"sent":"man in black","ref_id":9493},{"sent":"man","ref_id":9494},{"sent":"woman on left","ref_id":9509},{"sent":"woman in middle","ref_id":9510},{"sent":"man in white shirt","ref_id":9511},{"sent":"second from right","ref_id":9532},{"sent":"second from left","ref_id":9533},{"sent":"guy in white shirt","ref_id":9534},{"sent":"tennis player","ref_id":9760},{"sent":"man in blue","ref_id":9761},{"sent":"woman in middle","ref_id":9799},{"sent":"man in white shirt","ref_id":9800},{"sent":"man","ref_id":9856},{"sent":"woman sitting","ref_id":9857},{"sent":"woman","ref_id":9858},{"sent":"white car behind the guy in 
the background","ref_id":9865},{"sent":"blue shirt","ref_id":9866},{"sent":"batter","ref_id":9867},{"sent":"guy in background behind fence","ref_id":9949},{"sent":"batter","ref_id":9950},{"sent":"woman","ref_id":10015},{"sent":"girl","ref_id":10016},{"sent":"white mug","ref_id":10047},{"sent":"glass on left","ref_id":10048},{"sent":"woman","ref_id":10049},{"sent":"man on left","ref_id":10114},{"sent":"guy in white shirt","ref_id":10115},{"sent":"girl in pink dress","ref_id":10136},{"sent":"woman in pink","ref_id":10137},{"sent":"person in white shirt","ref_id":10138},{"sent":"left person","ref_id":10139},{"sent":"man in black on right","ref_id":10140},{"sent":"woman in pink","ref_id":10141},{"sent":"player in white","ref_id":10217},{"sent":"red shirt","ref_id":10218},{"sent":"guy in yellow","ref_id":10219},{"sent":"man on left","ref_id":10271},{"sent":"man on right","ref_id":10272},{"sent":"batter","ref_id":10332},{"sent":"right bear","ref_id":10374},{"sent":"bear in red","ref_id":10375},{"sent":"girl in pink","ref_id":10376},{"sent":"boy in middle","ref_id":10377},{"sent":"left guy","ref_id":10412},{"sent":"right guy","ref_id":10413},{"sent":"bottom right girl","ref_id":10449},{"sent":"bottom left head","ref_id":10450},{"sent":"man on left","ref_id":10554},{"sent":"woman","ref_id":10555},{"sent":"right guy","ref_id":10629},{"sent":"man on right","ref_id":10630},{"sent":"man in black shirt","ref_id":10749},{"sent":"man on left","ref_id":10750},{"sent":"woman eating","ref_id":10879},{"sent":"woman in white shirt","ref_id":10895},{"sent":"man in blue shirt","ref_id":10896},{"sent":"girl in red","ref_id":10995},{"sent":"woman","ref_id":10996},{"sent":"couch on right","ref_id":11019},{"sent":"dog","ref_id":11020},{"sent":"woman on left","ref_id":11021},{"sent":"man on right","ref_id":11022},{"sent":"left couch","ref_id":11023},{"sent":"right couch","ref_id":11024},{"sent":"right person","ref_id":11025},{"sent":"woman on left","ref_id":11026},{"sent":"man in middle","ref_id":11027},{"sent":"white pants","ref_id":11032},{"sent":"person in black","ref_id":11033},{"sent":"right kid","ref_id":11101},{"sent":"right kid","ref_id":11102},{"sent":"kid in middle","ref_id":11103},{"sent":"right kid","ref_id":11104},{"sent":"middle person","ref_id":11159},{"sent":"woman on left","ref_id":11160},{"sent":"woman in front","ref_id":11161},{"sent":"man in black shirt and jeans","ref_id":11203},{"sent":"man in blue shirt","ref_id":11204},{"sent":"woman in black","ref_id":11205},{"sent":"catcher","ref_id":11224},{"sent":"batter","ref_id":11225},{"sent":"arm","ref_id":11372},{"sent":"arm","ref_id":11373},{"sent":"man in white","ref_id":11409},{"sent":"guy in red","ref_id":11410},{"sent":"batter","ref_id":11411},{"sent":"woman in red","ref_id":11541},{"sent":"woman on left","ref_id":11637},{"sent":"man on right","ref_id":11638},{"sent":"woman in black","ref_id":11639},{"sent":"person on left","ref_id":11676},{"sent":"person on left","ref_id":11677},{"sent":"girl on left","ref_id":11752},{"sent":"woman","ref_id":11753},{"sent":"man on left","ref_id":11757},{"sent":"right sheep","ref_id":11758},{"sent":"person in front","ref_id":11769},{"sent":"arm","ref_id":11770},{"sent":"right guy","ref_id":11894},{"sent":"blue shirt","ref_id":11895},{"sent":"catcher","ref_id":12001},{"sent":"batter","ref_id":12002},{"sent":"woman on left","ref_id":12023},{"sent":"man","ref_id":12024},{"sent":"second from right","ref_id":12025},{"sent":"man on right","ref_id":12026},{"sent":"man on right","ref_id":12027},{"sent":"second from 
left","ref_id":12028},{"sent":"man on left","ref_id":12029},{"sent":"second from right","ref_id":12030},{"sent":"woman in red","ref_id":12261},{"sent":"right front animal","ref_id":12262},{"sent":"front cow","ref_id":12263},{"sent":"man in white","ref_id":12357},{"sent":"man","ref_id":12358},{"sent":"person on right","ref_id":12665},{"sent":"woman","ref_id":12666},{"sent":"man in blue","ref_id":12719},{"sent":"woman in red","ref_id":12720},{"sent":"woman in black","ref_id":12721},{"sent":"woman","ref_id":12758},{"sent":"man in blue","ref_id":12759},{"sent":"girl on right","ref_id":12963},{"sent":"woman","ref_id":12964},{"sent":"man on left","ref_id":13055},{"sent":"man on left","ref_id":13056},{"sent":"man on right","ref_id":13057},{"sent":"woman","ref_id":13087},{"sent":"man","ref_id":13088},{"sent":"girl in white","ref_id":13209},{"sent":"man on left","ref_id":13210},{"sent":"bowl of food in front","ref_id":13236},{"sent":"chair top left","ref_id":13237},{"sent":"bowl of food in front","ref_id":13238},{"sent":"top right corner","ref_id":13239},{"sent":"top right corner","ref_id":13373},{"sent":"table top left","ref_id":13374},{"sent":"guy in white shirt","ref_id":13382},{"sent":"guy in white","ref_id":13383},{"sent":"guy on right","ref_id":13386},{"sent":"man on left","ref_id":13387},{"sent":"table cloth","ref_id":13410},{"sent":"woman on left","ref_id":13411},{"sent":"girl on right","ref_id":13412},{"sent":"guy in white shirt","ref_id":13439},{"sent":"man in blue shirt","ref_id":13440},{"sent":"girl in white","ref_id":13441},{"sent":"man in white shirt","ref_id":13442},{"sent":"girl in middle","ref_id":13625},{"sent":"girl on left","ref_id":13626},{"sent":"girl in pink","ref_id":13627},{"sent":"man in blue shirt","ref_id":13793},{"sent":"woman in red","ref_id":13794},{"sent":"girl","ref_id":13869},{"sent":"woman","ref_id":13870},{"sent":"person on left","ref_id":13894},{"sent":"woman in black","ref_id":13895},{"sent":"top left corner","ref_id":14024},{"sent":"left pizza","ref_id":14025},{"sent":"right pizza","ref_id":14026},{"sent":"top right corner","ref_id":14027},{"sent":"white shirt left","ref_id":14038},{"sent":"person in red","ref_id":14039},{"sent":"left bed","ref_id":14040},{"sent":"right person","ref_id":14041},{"sent":"red thing on left","ref_id":14042},{"sent":"person on left","ref_id":14102},{"sent":"person on right","ref_id":14103},{"sent":"person in front","ref_id":14104},{"sent":"person in middle","ref_id":14105},{"sent":"person in blue","ref_id":14106},{"sent":"left chair","ref_id":14201},{"sent":"woman","ref_id":14202},{"sent":"chair on left","ref_id":14203},{"sent":"chair in front of woman","ref_id":14204},{"sent":"man in front","ref_id":14270},{"sent":"man in white shirt","ref_id":14271},{"sent":"bike on right","ref_id":14272},{"sent":"bike in front","ref_id":14273},{"sent":"batter","ref_id":14274},{"sent":"batter","ref_id":14275},{"sent":"umpire","ref_id":14276},{"sent":"batter","ref_id":14277},{"sent":"man","ref_id":14316},{"sent":"woman","ref_id":14317},{"sent":"kid on right","ref_id":14352},{"sent":"kid in middle","ref_id":14353},{"sent":"girl in pink","ref_id":14377},{"sent":"girl on left","ref_id":14378},{"sent":"girl","ref_id":14379},{"sent":"woman in pink","ref_id":14380},{"sent":"woman","ref_id":14482},{"sent":"woman","ref_id":14483},{"sent":"woman in black","ref_id":14519},{"sent":"man in middle","ref_id":14520},{"sent":"hand holding phone","ref_id":14521},{"sent":"woman in black","ref_id":14522},{"sent":"woman in green","ref_id":14523},{"sent":"woman in 
black","ref_id":14524},{"sent":"man on left","ref_id":14601},{"sent":"left guy","ref_id":14602},{"sent":"guy in black shirt","ref_id":14603},{"sent":"right person","ref_id":14694},{"sent":"guy in blue","ref_id":14695},{"sent":"umpire","ref_id":14755},{"sent":"catcher","ref_id":14756},{"sent":"batter","ref_id":14757},{"sent":"right person","ref_id":14855},{"sent":"left person","ref_id":14856},{"sent":"person on right","ref_id":14857},{"sent":"man in black shirt","ref_id":14869},{"sent":"man in white","ref_id":14870},{"sent":"kid on left","ref_id":14883},{"sent":"kid on right","ref_id":14884},{"sent":"right guy","ref_id":14940},{"sent":"girl in white","ref_id":14941},{"sent":"man on left","ref_id":14968},{"sent":"red bus","ref_id":14969},{"sent":"man in white shirt","ref_id":14981},{"sent":"man in white","ref_id":14982},{"sent":"girl in white shirt","ref_id":15085},{"sent":"man in white shirt","ref_id":15086},{"sent":"girl in white","ref_id":15087},{"sent":"white table in front of girl","ref_id":15088},{"sent":"arm on left","ref_id":15089},{"sent":"right person","ref_id":15092},{"sent":"woman in white","ref_id":15093},{"sent":"man in blue shirt","ref_id":15094},{"sent":"woman on left","ref_id":15253},{"sent":"person on right","ref_id":15254},{"sent":"white shirt","ref_id":15255},{"sent":"woman in white","ref_id":15342},{"sent":"man","ref_id":15343},{"sent":"hand on left","ref_id":15348},{"sent":"right person","ref_id":15349},{"sent":"woman on left","ref_id":15366},{"sent":"left person","ref_id":15367},{"sent":"woman on right","ref_id":15368},{"sent":"person in white shirt","ref_id":15369},{"sent":"person on left","ref_id":15370},{"sent":"guy","ref_id":15394},{"sent":"blue shirt","ref_id":15432},{"sent":"baby","ref_id":15433},{"sent":"woman in red","ref_id":15555},{"sent":"woman in blue","ref_id":15556},{"sent":"left person","ref_id":15563},{"sent":"man in white","ref_id":15564},{"sent":"woman","ref_id":15699},{"sent":"bottom left corner","ref_id":15754},{"sent":"man in white","ref_id":15755},{"sent":"right guy","ref_id":15825},{"sent":"man in suit","ref_id":15826},{"sent":"man on right","ref_id":15986},{"sent":"man","ref_id":15987},{"sent":"top right corner","ref_id":16068},{"sent":"tennis player","ref_id":16069},{"sent":"girl","ref_id":16077},{"sent":"man","ref_id":16078},{"sent":"left woman","ref_id":16126},{"sent":"woman on left","ref_id":16127},{"sent":"woman in middle","ref_id":16128},{"sent":"woman in middle","ref_id":16129},{"sent":"man in black suit","ref_id":16130},{"sent":"man in middle","ref_id":16131},{"sent":"woman","ref_id":16200},{"sent":"woman","ref_id":16201},{"sent":"right player","ref_id":16425},{"sent":"left player","ref_id":16426},{"sent":"right person","ref_id":16543},{"sent":"right person","ref_id":16544},{"sent":"guy in red","ref_id":16545},{"sent":"left blue","ref_id":16566},{"sent":"pink","ref_id":16567},{"sent":"person in white","ref_id":16568},{"sent":"person in front","ref_id":16569},{"sent":"horse","ref_id":16636},{"sent":"man in front","ref_id":16732},{"sent":"man in white","ref_id":16738},{"sent":"right guy","ref_id":16739},{"sent":"right person","ref_id":16740},{"sent":"man on left","ref_id":16741},{"sent":"right guy","ref_id":16786},{"sent":"red shirt","ref_id":16787},{"sent":"man in black","ref_id":16788},{"sent":"man in middle","ref_id":16804},{"sent":"woman on left","ref_id":16805},{"sent":"man in suit","ref_id":16892},{"sent":"top left corner","ref_id":16896},{"sent":"hand on left","ref_id":16897},{"sent":"person on left","ref_id":16898},{"sent":"head of 
person in front of girl","ref_id":17039},{"sent":"girl","ref_id":17040},{"sent":"left guy","ref_id":17138},{"sent":"woman in black","ref_id":17139},{"sent":"catcher","ref_id":17322},{"sent":"umpire","ref_id":17323},{"sent":"batter","ref_id":17324},{"sent":"man","ref_id":17488},{"sent":"girl","ref_id":17489},{"sent":"top right corner","ref_id":17497},{"sent":"person on left","ref_id":17498},{"sent":"white sheep","ref_id":17523},{"sent":"man on right","ref_id":17524},{"sent":"man on left","ref_id":17545},{"sent":"man on right","ref_id":17546},{"sent":"left person","ref_id":17579},{"sent":"right guy","ref_id":17580},{"sent":"catcher","ref_id":17622},{"sent":"player","ref_id":17623},{"sent":"right side of pizza","ref_id":17629},{"sent":"baby","ref_id":17630},{"sent":"woman on left","ref_id":17643},{"sent":"girl in front","ref_id":17644},{"sent":"woman in black","ref_id":17715},{"sent":"woman in black","ref_id":17716},{"sent":"man on left","ref_id":17717},{"sent":"man in white shirt","ref_id":17731},{"sent":"left guy","ref_id":17732},{"sent":"woman in middle","ref_id":17906},{"sent":"woman in white","ref_id":17907},{"sent":"batter","ref_id":17974},{"sent":"catcher","ref_id":17975},{"sent":"catcher","ref_id":17976},{"sent":"batter","ref_id":17977},{"sent":"guy on right","ref_id":17986},{"sent":"guy","ref_id":17987},{"sent":"woman in white shirt","ref_id":18064},{"sent":"woman in white shirt","ref_id":18065},{"sent":"man in black shirt","ref_id":18066},{"sent":"man on right","ref_id":18067},{"sent":"woman in black","ref_id":18127},{"sent":"woman in black","ref_id":18128},{"sent":"bride","ref_id":18162},{"sent":"woman","ref_id":18163},{"sent":"man","ref_id":18164},{"sent":"right couch","ref_id":18167},{"sent":"man on left","ref_id":18168},{"sent":"right couch","ref_id":18169},{"sent":"left couch","ref_id":18170},{"sent":"man on right","ref_id":18274},{"sent":"table in front of man","ref_id":18275},{"sent":"man in blue","ref_id":18276},{"sent":"man on right","ref_id":18277},{"sent":"bottom right corner","ref_id":18278},{"sent":"right guy","ref_id":18297},{"sent":"tie","ref_id":18298},{"sent":"man in white","ref_id":18325},{"sent":"man in background","ref_id":18326},{"sent":"umbrella","ref_id":18362},{"sent":"person on left","ref_id":18363},{"sent":"pink umbrella","ref_id":18364},{"sent":"girl in pink","ref_id":18448},{"sent":"girl in pink","ref_id":18449},{"sent":"right guy","ref_id":18488},{"sent":"man on right","ref_id":18489},{"sent":"man in blue shirt","ref_id":18490},{"sent":"kid in white shirt","ref_id":18584},{"sent":"guy in white shirt","ref_id":18585},{"sent":"red shirt","ref_id":18586},{"sent":"guy on left","ref_id":18701},{"sent":"left guy","ref_id":18702},{"sent":"guy on right","ref_id":18703},{"sent":"guy in front","ref_id":18704},{"sent":"person on left","ref_id":18738},{"sent":"woman","ref_id":18739},{"sent":"woman","ref_id":18740},{"sent":"woman on right","ref_id":18798},{"sent":"man on right","ref_id":18799},{"sent":"woman in black","ref_id":18800},{"sent":"person in white","ref_id":18804},{"sent":"right guy","ref_id":18805},{"sent":"right UNK","ref_id":18806},{"sent":"right person","ref_id":18846},{"sent":"left person","ref_id":18847},{"sent":"man in white","ref_id":18888},{"sent":"guy on right","ref_id":18889},{"sent":"person on left","ref_id":18912},{"sent":"man","ref_id":18913},{"sent":"right guy","ref_id":18914},{"sent":"red shirt","ref_id":18931},{"sent":"white shirt right","ref_id":18932},{"sent":"person in background in background","ref_id":18933},{"sent":"blurry person in 
background behind the tennis player","ref_id":18934},{"sent":"blurry person in background on left","ref_id":18935},{"sent":"guy in red shirt","ref_id":18936},{"sent":"guy in white shirt","ref_id":18937},{"sent":"tennis player","ref_id":18938},{"sent":"girl","ref_id":19008},{"sent":"person on left","ref_id":19040},{"sent":"girl in yellow","ref_id":19062},{"sent":"bottom left head","ref_id":19063},{"sent":"man on left","ref_id":19064},{"sent":"right bottom corner","ref_id":19132},{"sent":"person in white on left","ref_id":19133},{"sent":"man in black","ref_id":19134},{"sent":"bottom left corner","ref_id":19279},{"sent":"man","ref_id":19280},{"sent":"girl on right","ref_id":19325},{"sent":"kid","ref_id":19326},{"sent":"girl on right","ref_id":19327},{"sent":"man in blue","ref_id":19348},{"sent":"man on right","ref_id":19349},{"sent":"man on left","ref_id":19428},{"sent":"guy in middle","ref_id":19429},{"sent":"guy on right","ref_id":19430},{"sent":"woman in pink","ref_id":19433},{"sent":"man in blue shirt","ref_id":19434},{"sent":"girl in red","ref_id":19448},{"sent":"left girl","ref_id":19449},{"sent":"girl in red","ref_id":19450},{"sent":"right guy","ref_id":19451},{"sent":"girl on right","ref_id":19469},{"sent":"pizza on right","ref_id":19470},{"sent":"girl","ref_id":19471},{"sent":"woman in white","ref_id":19509},{"sent":"man in black","ref_id":19510},{"sent":"man in front","ref_id":19511},{"sent":"batter","ref_id":19512},{"sent":"guy in blue shirt behind fence","ref_id":19513},{"sent":"person in background on left","ref_id":19514},{"sent":"red shirt","ref_id":19515},{"sent":"man in red","ref_id":19516},{"sent":"person in red","ref_id":19517},{"sent":"woman on left","ref_id":19543},{"sent":"person in background","ref_id":19634},{"sent":"kid","ref_id":19635},{"sent":"left person","ref_id":19684},{"sent":"woman","ref_id":19685},{"sent":"woman in pink","ref_id":19686},{"sent":"woman on right","ref_id":19687},{"sent":"woman","ref_id":19688},{"sent":"girl on right","ref_id":19732},{"sent":"left guy","ref_id":19733},{"sent":"arm","ref_id":19743},{"sent":"right cake","ref_id":19744},{"sent":"cake","ref_id":19745},{"sent":"the arm on the left","ref_id":19746},{"sent":"bottom left hand","ref_id":19843},{"sent":"girl","ref_id":19844},{"sent":"right person","ref_id":19901},{"sent":"left person","ref_id":19902},{"sent":"red jacket","ref_id":19903},{"sent":"second from right","ref_id":19904},{"sent":"woman in red","ref_id":19941},{"sent":"woman on left","ref_id":19942},{"sent":"woman in white","ref_id":20041},{"sent":"woman on right","ref_id":20042},{"sent":"laptop on left","ref_id":20099},{"sent":"middle laptop","ref_id":20100},{"sent":"man in white shirt","ref_id":20101},{"sent":"left laptop","ref_id":20102},{"sent":"man in middle","ref_id":20103},{"sent":"man","ref_id":20246},{"sent":"woman","ref_id":20247},{"sent":"man on left","ref_id":20268},{"sent":"kid","ref_id":20269},{"sent":"right pizza","ref_id":20311},{"sent":"pizza on left","ref_id":20312},{"sent":"pizza on right","ref_id":20313},{"sent":"top right corner","ref_id":20314},{"sent":"arm in back","ref_id":20315},{"sent":"left person","ref_id":20389},{"sent":"kid","ref_id":20390},{"sent":"man in middle","ref_id":20420},{"sent":"man on left","ref_id":20421},{"sent":"top right corner","ref_id":20454},{"sent":"hand","ref_id":20455},{"sent":"woman on left","ref_id":20469},{"sent":"man in middle","ref_id":20470},{"sent":"woman on right","ref_id":20471},{"sent":"woman on right","ref_id":20479},{"sent":"man in 
front","ref_id":20480},{"sent":"woman","ref_id":20505},{"sent":"woman on left","ref_id":20506},{"sent":"man on left","ref_id":20512},{"sent":"girl in red","ref_id":20513},{"sent":"girl in blue","ref_id":20514},{"sent":"man","ref_id":20602},{"sent":"bride","ref_id":20603},{"sent":"man in black","ref_id":20649},{"sent":"man in black","ref_id":20650},{"sent":"person in front","ref_id":20663},{"sent":"left person","ref_id":20664},{"sent":"person on right","ref_id":20665},{"sent":"white shirt","ref_id":20666},{"sent":"person on right","ref_id":20667},{"sent":"bottom of the UNK","ref_id":20668},{"sent":"black thing on top of suitcase","ref_id":20755},{"sent":"legs on right","ref_id":20756},{"sent":"left leg","ref_id":20757},{"sent":"kid","ref_id":20791},{"sent":"girl","ref_id":20792},{"sent":"man on left","ref_id":20875},{"sent":"man on right","ref_id":20876},{"sent":"woman in middle","ref_id":20877},{"sent":"man in black","ref_id":20938},{"sent":"person on right","ref_id":20939},{"sent":"giraffe","ref_id":20940},{"sent":"giraffe on right","ref_id":20941},{"sent":"man on right","ref_id":20942},{"sent":"man in black","ref_id":20943},{"sent":"left person","ref_id":20944},{"sent":"kid in front","ref_id":20945},{"sent":"batter","ref_id":20946},{"sent":"catcher","ref_id":20947},{"sent":"umpire","ref_id":20948},{"sent":"catcher","ref_id":20954},{"sent":"top left corner","ref_id":20977},{"sent":"person in back","ref_id":20978},{"sent":"baby","ref_id":21019},{"sent":"baby","ref_id":21020},{"sent":"sheep in front","ref_id":21081},{"sent":"right kid","ref_id":21082},{"sent":"girl in pink","ref_id":21083},{"sent":"sheep in front","ref_id":21084},{"sent":"girl on left","ref_id":21122},{"sent":"man on right","ref_id":21123},{"sent":"man in black","ref_id":21124},{"sent":"woman in white","ref_id":21125},{"sent":"left guy","ref_id":21190},{"sent":"woman","ref_id":21191},{"sent":"man","ref_id":21292},{"sent":"white tie","ref_id":21293},{"sent":"UNK","ref_id":21294},{"sent":"left tie","ref_id":21295},{"sent":"left person","ref_id":21296},{"sent":"woman in white","ref_id":21302},{"sent":"man in white","ref_id":21303},{"sent":"girl on left","ref_id":21410},{"sent":"girl on right","ref_id":21411},{"sent":"number 18","ref_id":21422},{"sent":"man in blue shirt","ref_id":21423},{"sent":"number 2","ref_id":21424},{"sent":"left player","ref_id":21425},{"sent":"number 18","ref_id":21426},{"sent":"second from left","ref_id":21433},{"sent":"second board from right","ref_id":21434},{"sent":"right person","ref_id":21435},{"sent":"second from right","ref_id":21436},{"sent":"second from left","ref_id":21437},{"sent":"middle person","ref_id":21438},{"sent":"left person","ref_id":21439},{"sent":"man on left","ref_id":21440},{"sent":"right girl","ref_id":21444},{"sent":"man in white","ref_id":21525},{"sent":"person on right","ref_id":21580},{"sent":"person on left","ref_id":21581},{"sent":"man in red shirt","ref_id":21607},{"sent":"man on left","ref_id":21608},{"sent":"woman in front","ref_id":21609},{"sent":"pizza slice on left","ref_id":21616},{"sent":"pizza slice","ref_id":21617},{"sent":"hand on left","ref_id":21618},{"sent":"white shirt upper right","ref_id":21619},{"sent":"bottom left corner","ref_id":21798},{"sent":"woman","ref_id":21799},{"sent":"girl in blue","ref_id":22059},{"sent":"girl in pink","ref_id":22060},{"sent":"kid on right","ref_id":22061},{"sent":"girl in pink","ref_id":22062},{"sent":"girl in pink","ref_id":22063},{"sent":"girl in white","ref_id":22088},{"sent":"left guy","ref_id":22089},{"sent":"guy in 
blue","ref_id":22117},{"sent":"red shirt","ref_id":22118},{"sent":"woman","ref_id":22475},{"sent":"girl in yellow","ref_id":22476},{"sent":"right racket","ref_id":22477},{"sent":"man in blue shirt","ref_id":22504},{"sent":"woman on left","ref_id":22505},{"sent":"woman on right","ref_id":22659},{"sent":"woman on left","ref_id":22660},{"sent":"woman in black","ref_id":22715},{"sent":"woman in black","ref_id":22716},{"sent":"man in white","ref_id":22717},{"sent":"woman on right","ref_id":22718},{"sent":"person in white shirt","ref_id":22796},{"sent":"woman in white","ref_id":22797},{"sent":"left person","ref_id":22798},{"sent":"right guy","ref_id":22862},{"sent":"left player","ref_id":22863},{"sent":"woman","ref_id":23015},{"sent":"woman in black","ref_id":23016},{"sent":"person on left","ref_id":23077},{"sent":"man","ref_id":23078},{"sent":"red shirt","ref_id":23129},{"sent":"red shirt","ref_id":23130},{"sent":"girl in pink","ref_id":23131},{"sent":"woman on right","ref_id":23179},{"sent":"woman in black","ref_id":23180},{"sent":"man on left","ref_id":23192},{"sent":"man on right","ref_id":23193},{"sent":"man in white shirt","ref_id":23194},{"sent":"man on left","ref_id":23235},{"sent":"woman in white","ref_id":23236},{"sent":"woman in middle","ref_id":23237},{"sent":"man in middle","ref_id":23249},{"sent":"man on right","ref_id":23250},{"sent":"man on left","ref_id":23251},{"sent":"left person","ref_id":23254},{"sent":"man in white shirt","ref_id":23255},{"sent":"right person","ref_id":23256},{"sent":"catcher","ref_id":23358},{"sent":"umpire","ref_id":23359},{"sent":"man in white","ref_id":23410},{"sent":"woman in red","ref_id":23411},{"sent":"girl in pink","ref_id":23412},{"sent":"baby","ref_id":23564},{"sent":"baby","ref_id":23565},{"sent":"bottom right bowl","ref_id":23652},{"sent":"woman in white","ref_id":23653},{"sent":"woman in white","ref_id":23654},{"sent":"top right corner","ref_id":23857},{"sent":"man in white shirt","ref_id":23863},{"sent":"guy in blue","ref_id":23864},{"sent":"guy in blue shirt","ref_id":23865},{"sent":"girl in yellow","ref_id":23866},{"sent":"man on left","ref_id":23890},{"sent":"woman on right","ref_id":23891},{"sent":"girl","ref_id":23910},{"sent":"girl","ref_id":23911},{"sent":"woman in red","ref_id":23919},{"sent":"person in front","ref_id":23920},{"sent":"right guy","ref_id":24045},{"sent":"woman in white","ref_id":24046},{"sent":"woman in middle","ref_id":24058},{"sent":"man in middle","ref_id":24059},{"sent":"man in white shirt","ref_id":24060},{"sent":"man on right","ref_id":24061},{"sent":"man on left","ref_id":24062},{"sent":"woman in purple","ref_id":24063},{"sent":"person on right","ref_id":24064},{"sent":"woman in middle","ref_id":24237},{"sent":"baby","ref_id":24238},{"sent":"left guy","ref_id":24265},{"sent":"man on right","ref_id":24266},{"sent":"man in black","ref_id":24267},{"sent":"right player","ref_id":24297},{"sent":"left player","ref_id":24298},{"sent":"man on left","ref_id":24320},{"sent":"man in white","ref_id":24321},{"sent":"man on right","ref_id":24322},{"sent":"man on right","ref_id":24323},{"sent":"man in blue","ref_id":24369},{"sent":"left glass","ref_id":24370},{"sent":"man in red","ref_id":24371},{"sent":"left guy","ref_id":24381},{"sent":"person on right","ref_id":24382},{"sent":"man on right","ref_id":24396},{"sent":"man in black","ref_id":24397},{"sent":"girl on left","ref_id":24454},{"sent":"girl in white","ref_id":24455},{"sent":"man on right","ref_id":24456},{"sent":"man in white","ref_id":24491},{"sent":"person on 
left","ref_id":24492},{"sent":"woman on right","ref_id":24493},{"sent":"man in blue shirt","ref_id":24510},{"sent":"woman in back","ref_id":24511},{"sent":"woman","ref_id":24512},{"sent":"man in white","ref_id":24513},{"sent":"right person","ref_id":24525},{"sent":"left guy","ref_id":24526},{"sent":"boy in white","ref_id":24527},{"sent":"right hot dog","ref_id":24891},{"sent":"hot dog on left","ref_id":24892},{"sent":"hand on left","ref_id":24893},{"sent":"arm on right","ref_id":24894},{"sent":"man on left","ref_id":24895},{"sent":"man in red shirt","ref_id":24896},{"sent":"woman in white","ref_id":24897},{"sent":"left guy","ref_id":24938},{"sent":"girl in white","ref_id":24939},{"sent":"white shirt","ref_id":25002},{"sent":"man in black shirt on left","ref_id":25003},{"sent":"man in white shirt","ref_id":25004},{"sent":"person in white shirt","ref_id":25077},{"sent":"man","ref_id":25078},{"sent":"person on right","ref_id":25334},{"sent":"man","ref_id":25335},{"sent":"woman","ref_id":25359},{"sent":"man","ref_id":25360},{"sent":"top left black shirt","ref_id":25386},{"sent":"top right corner","ref_id":25387},{"sent":"batter","ref_id":25419},{"sent":"batter","ref_id":25420},{"sent":"right player","ref_id":25471},{"sent":"left player","ref_id":25472},{"sent":"person on left","ref_id":25546},{"sent":"guy in white","ref_id":25547},{"sent":"kid","ref_id":25548},{"sent":"woman","ref_id":25631},{"sent":"woman in white","ref_id":25632},{"sent":"man on right","ref_id":25753},{"sent":"boy in yellow","ref_id":25754},{"sent":"white shirt","ref_id":25800},{"sent":"woman","ref_id":25801},{"sent":"hand","ref_id":25802},{"sent":"umpire","ref_id":25818},{"sent":"batter","ref_id":25819},{"sent":"man","ref_id":26042},{"sent":"girl","ref_id":26043},{"sent":"man on right","ref_id":26086},{"sent":"man in black","ref_id":26087},{"sent":"man in black shirt","ref_id":26088},{"sent":"man on left","ref_id":26089},{"sent":"left most person","ref_id":26263},{"sent":"second board from right","ref_id":26264},{"sent":"man on left","ref_id":26265},{"sent":"guy in middle","ref_id":26266},{"sent":"person in middle","ref_id":26267},{"sent":"man","ref_id":26498},{"sent":"woman","ref_id":26499},{"sent":"guy on right","ref_id":26509},{"sent":"kid in white","ref_id":26510},{"sent":"person on right","ref_id":26571},{"sent":"person on right","ref_id":26572},{"sent":"batter","ref_id":26628},{"sent":"umpire","ref_id":26629},{"sent":"catcher","ref_id":26630},{"sent":"person on left","ref_id":26631},{"sent":"man in white shirt","ref_id":26632},{"sent":"woman in black dress","ref_id":26633},{"sent":"woman in black","ref_id":26634},{"sent":"person in white shirt","ref_id":26684},{"sent":"man in white","ref_id":26685},{"sent":"man in middle","ref_id":26686},{"sent":"woman in front","ref_id":26698},{"sent":"woman","ref_id":26699},{"sent":"man in front","ref_id":26744},{"sent":"right person","ref_id":26745},{"sent":"table on right","ref_id":26749},{"sent":"right kid","ref_id":26750},{"sent":"boy in white shirt","ref_id":26751},{"sent":"left table","ref_id":26752},{"sent":"person on right","ref_id":26856},{"sent":"left person","ref_id":26857},{"sent":"bottom left head","ref_id":26858},{"sent":"guy in white","ref_id":26877},{"sent":"guy in black","ref_id":26878},{"sent":"man in front","ref_id":26977},{"sent":"person on left","ref_id":26978},{"sent":"kid in red","ref_id":27212},{"sent":"kid in red","ref_id":27213},{"sent":"left person","ref_id":27366},{"sent":"bottom left corner","ref_id":27367},{"sent":"bottom 
sandwich","ref_id":27368},{"sent":"boy in blue","ref_id":27369},{"sent":"man in blue","ref_id":27370},{"sent":"person on right","ref_id":27447},{"sent":"person on left","ref_id":27448},{"sent":"man on left with hat","ref_id":27489},{"sent":"horse in front","ref_id":27490},{"sent":"horse on right","ref_id":27491},{"sent":"man in front with blue hat","ref_id":27492},{"sent":"horse on left","ref_id":27493},{"sent":"man in blue shirt","ref_id":27550},{"sent":"man in white","ref_id":27551},{"sent":"person on right","ref_id":27643},{"sent":"person in middle","ref_id":27644},{"sent":"chef on right","ref_id":27684},{"sent":"left chef","ref_id":27685},{"sent":"bottom right phone","ref_id":27717},{"sent":"bottom left hand","ref_id":27718},{"sent":"man on right","ref_id":27728},{"sent":"man in white shirt","ref_id":27729},{"sent":"man","ref_id":27775},{"sent":"bride","ref_id":27776},{"sent":"man in white shirt","ref_id":27946},{"sent":"woman in white","ref_id":27947},{"sent":"blue shirt","ref_id":27948},{"sent":"woman in white","ref_id":27949},{"sent":"bottom right corner","ref_id":27950},{"sent":"man on right","ref_id":27957},{"sent":"man","ref_id":27958},{"sent":"person in front","ref_id":28059},{"sent":"person in middle","ref_id":28060},{"sent":"the umbrella","ref_id":28061},{"sent":"man in white","ref_id":28068},{"sent":"man in blue","ref_id":28069},{"sent":"man in white shirt","ref_id":28070},{"sent":"person on left","ref_id":28146},{"sent":"man in white","ref_id":28147},{"sent":"person in blue","ref_id":28386},{"sent":"person in front","ref_id":28387},{"sent":"woman","ref_id":28388},{"sent":"man on right","ref_id":28389},{"sent":"woman","ref_id":28390},{"sent":"catcher","ref_id":28414},{"sent":"batter","ref_id":28415},{"sent":"baby","ref_id":28479},{"sent":"baby","ref_id":28480},{"sent":"left guy","ref_id":28761},{"sent":"right person","ref_id":28762},{"sent":"right skier","ref_id":28788},{"sent":"middle person","ref_id":28789},{"sent":"blue shirt","ref_id":28790},{"sent":"girl in pink","ref_id":28791},{"sent":"baby","ref_id":28792},{"sent":"arm on left","ref_id":28825},{"sent":"arm on right","ref_id":28826},{"sent":"man in white shirt","ref_id":28827},{"sent":"woman in black","ref_id":28874},{"sent":"boy in blue shirt","ref_id":28875},{"sent":"man on left","ref_id":29002},{"sent":"right guy","ref_id":29003},{"sent":"person on left","ref_id":29017},{"sent":"person on right","ref_id":29018},{"sent":"man in white shirt","ref_id":29100},{"sent":"guy in red shirt","ref_id":29101},{"sent":"player in front","ref_id":29102},{"sent":"girl","ref_id":29236},{"sent":"left girl","ref_id":29237},{"sent":"person on right","ref_id":29341},{"sent":"tennis player","ref_id":29342},{"sent":"guy on bike","ref_id":29390},{"sent":"left bike","ref_id":29391},{"sent":"bike on right","ref_id":29392},{"sent":"bike","ref_id":29393},{"sent":"left","ref_id":29417},{"sent":"woman","ref_id":29418},{"sent":"batter","ref_id":29448},{"sent":"batter","ref_id":29449},{"sent":"man in blue shirt","ref_id":29536},{"sent":"man in white shirt","ref_id":29537},{"sent":"man in white","ref_id":29538},{"sent":"right person","ref_id":29560},{"sent":"tennis player","ref_id":29561},{"sent":"left person","ref_id":29637},{"sent":"right person","ref_id":29638},{"sent":"woman in front","ref_id":29639},{"sent":"woman in black","ref_id":29640},{"sent":"person on right","ref_id":29677},{"sent":"left bed","ref_id":29678},{"sent":"woman on left","ref_id":29811},{"sent":"man","ref_id":29812},{"sent":"person in 
front","ref_id":29882},{"sent":"woman","ref_id":29883},{"sent":"girl","ref_id":29908},{"sent":"cake in front of cake","ref_id":29909},{"sent":"baby","ref_id":29910},{"sent":"man in black shirt","ref_id":29956},{"sent":"kid in white","ref_id":29957},{"sent":"man on left","ref_id":30096},{"sent":"guy in back","ref_id":30097},{"sent":"man in white shirt","ref_id":30098},{"sent":"man","ref_id":30099},{"sent":"woman in red","ref_id":30357},{"sent":"person on right","ref_id":30358},{"sent":"man","ref_id":30469},{"sent":"kid","ref_id":30470},{"sent":"woman","ref_id":30495},{"sent":"man","ref_id":30496},{"sent":"left person","ref_id":30516},{"sent":"woman","ref_id":30517},{"sent":"right person","ref_id":30525},{"sent":"girl","ref_id":30526},{"sent":"right girl","ref_id":30556},{"sent":"second from left","ref_id":30557},{"sent":"girl on right","ref_id":30558},{"sent":"girl in white","ref_id":30559},{"sent":"girl in middle","ref_id":30560},{"sent":"girl on left","ref_id":30561},{"sent":"right girl","ref_id":30562},{"sent":"second from left","ref_id":30563},{"sent":"man on right","ref_id":30637},{"sent":"right laptop","ref_id":30638},{"sent":"bottom left laptop","ref_id":30639},{"sent":"woman on left","ref_id":30640},{"sent":"man","ref_id":30676},{"sent":"man on right","ref_id":30677},{"sent":"left person","ref_id":30731},{"sent":"person in black","ref_id":30732},{"sent":"left guy","ref_id":30769},{"sent":"right guy","ref_id":30770},{"sent":"arm on left","ref_id":30803},{"sent":"girl in red","ref_id":30804},{"sent":"man on left","ref_id":30805},{"sent":"hand holding scissors","ref_id":30888},{"sent":"hand","ref_id":30889},{"sent":"left girl","ref_id":30959},{"sent":"baby","ref_id":30960},{"sent":"left bench","ref_id":31198},{"sent":"right bike","ref_id":31199},{"sent":"man in white shirt","ref_id":31201},{"sent":"person on right","ref_id":31202},{"sent":"woman","ref_id":31203},{"sent":"woman on right","ref_id":31206},{"sent":"woman","ref_id":31207},{"sent":"batter","ref_id":31356},{"sent":"catcher","ref_id":31357},{"sent":"batter","ref_id":31358},{"sent":"left edge of pic","ref_id":31459},{"sent":"arm on right","ref_id":31460},{"sent":"man on right","ref_id":31461},{"sent":"man","ref_id":31462},{"sent":"catcher","ref_id":31505},{"sent":"batter","ref_id":31506},{"sent":"girl on right","ref_id":31547},{"sent":"woman","ref_id":31548},{"sent":"chair on left","ref_id":31552},{"sent":"woman","ref_id":31553},{"sent":"girl","ref_id":31554},{"sent":"girl","ref_id":31555},{"sent":"left guy","ref_id":31572},{"sent":"right girl","ref_id":31573},{"sent":"right woman","ref_id":31597},{"sent":"man on right","ref_id":31598},{"sent":"woman in middle","ref_id":31599},{"sent":"woman in middle","ref_id":31600},{"sent":"man on left","ref_id":31601},{"sent":"player in white","ref_id":31762},{"sent":"red shirt right","ref_id":31763},{"sent":"blue shirt","ref_id":31764},{"sent":"guy in white shirt","ref_id":31765},{"sent":"person in black","ref_id":31799},{"sent":"person on right","ref_id":31800},{"sent":"man on right","ref_id":31816},{"sent":"man in suit","ref_id":31817},{"sent":"man on left","ref_id":31818},{"sent":"person on left","ref_id":31862},{"sent":"girl","ref_id":31863},{"sent":"person in black on right","ref_id":31866},{"sent":"yellow shirt","ref_id":31867},{"sent":"man","ref_id":31955},{"sent":"girl","ref_id":31956},{"sent":"left girl","ref_id":32164},{"sent":"boy on right","ref_id":32217},{"sent":"boy in blue","ref_id":32218},{"sent":"man on 
right","ref_id":32234},{"sent":"man","ref_id":32235},{"sent":"baby","ref_id":32298},{"sent":"man on left","ref_id":32299},{"sent":"black suitcase","ref_id":32432},{"sent":"black bag","ref_id":32433},{"sent":"catcher","ref_id":32508},{"sent":"batter","ref_id":32509},{"sent":"woman on left","ref_id":32582},{"sent":"girl in white","ref_id":32583},{"sent":"batter","ref_id":32584},{"sent":"umpire","ref_id":32585},{"sent":"woman","ref_id":32644},{"sent":"girl","ref_id":32645},{"sent":"right sheep","ref_id":32646},{"sent":"left sheep","ref_id":32647},{"sent":"right guy","ref_id":32804},{"sent":"man in middle","ref_id":32805},{"sent":"man on left","ref_id":32806},{"sent":"man in middle","ref_id":32807},{"sent":"man in white","ref_id":32850},{"sent":"guy in white shirt","ref_id":32851},{"sent":"man in white shirt","ref_id":33044},{"sent":"man in blue shirt","ref_id":33045},{"sent":"woman in white","ref_id":33046},{"sent":"man on right","ref_id":33056},{"sent":"man in middle","ref_id":33057},{"sent":"man","ref_id":33058},{"sent":"man","ref_id":33097},{"sent":"man on right","ref_id":33098},{"sent":"bottom right corner","ref_id":33327},{"sent":"man in black shirt","ref_id":33328},{"sent":"catcher","ref_id":33462},{"sent":"batter","ref_id":33463},{"sent":"man on right","ref_id":33599},{"sent":"man","ref_id":33600},{"sent":"right person","ref_id":33622},{"sent":"man on right","ref_id":33623},{"sent":"girl in middle","ref_id":33624},{"sent":"girl on left","ref_id":33625},{"sent":"guy on right","ref_id":33631},{"sent":"left person","ref_id":33632},{"sent":"catcher","ref_id":33633},{"sent":"batter","ref_id":33634},{"sent":"donut in middle","ref_id":33696},{"sent":"hand","ref_id":33697},{"sent":"white shirt","ref_id":33698},{"sent":"man in black shirt","ref_id":33819},{"sent":"woman on right","ref_id":33820},{"sent":"woman in black","ref_id":33821},{"sent":"man in black","ref_id":33822},{"sent":"blue suitcase","ref_id":33922},{"sent":"man in black shirt","ref_id":33923},{"sent":"the seat behind the man","ref_id":33924},{"sent":"woman in black","ref_id":33925},{"sent":"woman in front","ref_id":33926},{"sent":"girl","ref_id":33990},{"sent":"right flower","ref_id":33991},{"sent":"right bottom corner","ref_id":34093},{"sent":"woman in pink","ref_id":34094},{"sent":"woman in black","ref_id":34095},{"sent":"man on left","ref_id":34221},{"sent":"man in white","ref_id":34222},{"sent":"woman in white","ref_id":34389},{"sent":"man in white shirt","ref_id":34390},{"sent":"left girl","ref_id":34391},{"sent":"girl in middle","ref_id":34443},{"sent":"girl in white","ref_id":34444},{"sent":"woman on right","ref_id":34445},{"sent":"woman on left","ref_id":34463},{"sent":"white car","ref_id":34464},{"sent":"man in white","ref_id":34478},{"sent":"man on left","ref_id":34479},{"sent":"man in blue","ref_id":34480},{"sent":"man on right","ref_id":34481},{"sent":"right guy","ref_id":34655},{"sent":"woman","ref_id":34656},{"sent":"person on left","ref_id":34659},{"sent":"girl in front","ref_id":34660},{"sent":"girl on right","ref_id":34661},{"sent":"person in white shirt","ref_id":34708},{"sent":"person in white shirt","ref_id":34709},{"sent":"person in white shirt","ref_id":34710},{"sent":"person in white shirt","ref_id":34711},{"sent":"woman in pink","ref_id":34712},{"sent":"woman in front","ref_id":34713},{"sent":"second from right","ref_id":34716},{"sent":"man on right","ref_id":34717},{"sent":"right person","ref_id":34718},{"sent":"left person","ref_id":34719},{"sent":"right 
woman","ref_id":34743},{"sent":"woman","ref_id":34744},{"sent":"man on right","ref_id":34745},{"sent":"batter","ref_id":34879},{"sent":"catcher","ref_id":34880},{"sent":"left kid","ref_id":35066},{"sent":"girl on right","ref_id":35067},{"sent":"kid on right","ref_id":35100},{"sent":"kid","ref_id":35101},{"sent":"blue","ref_id":35172},{"sent":"man in front with white shirt","ref_id":35206},{"sent":"second from left","ref_id":35207},{"sent":"green shirt","ref_id":35208},{"sent":"woman in green","ref_id":35209},{"sent":"girl","ref_id":35268},{"sent":"woman","ref_id":35269},{"sent":"bride","ref_id":35305},{"sent":"groom","ref_id":35306},{"sent":"man","ref_id":35319},{"sent":"woman in red","ref_id":35320},{"sent":"man in white","ref_id":35331},{"sent":"man in white shirt","ref_id":35332},{"sent":"guy on right","ref_id":35333},{"sent":"left person","ref_id":35407},{"sent":"woman on right","ref_id":35408},{"sent":"woman","ref_id":35409},{"sent":"umbrella on right","ref_id":35410},{"sent":"top left corner","ref_id":35411},{"sent":"girl in blue","ref_id":35615},{"sent":"woman in white","ref_id":35616},{"sent":"left dog","ref_id":35654},{"sent":"bottom left corner","ref_id":35655},{"sent":"left person","ref_id":35656},{"sent":"right dog","ref_id":35657},{"sent":"man in white shirt","ref_id":35710},{"sent":"woman in white","ref_id":35711},{"sent":"girl","ref_id":35739},{"sent":"guy on left","ref_id":35740},{"sent":"girl","ref_id":35786},{"sent":"girl","ref_id":35787},{"sent":"left guy","ref_id":35794},{"sent":"right girl","ref_id":35795},{"sent":"woman in black","ref_id":35837},{"sent":"left guy","ref_id":35848},{"sent":"right person","ref_id":35849},{"sent":"boy in middle","ref_id":35850},{"sent":"right girl","ref_id":35851},{"sent":"woman in red","ref_id":35965},{"sent":"woman in red","ref_id":35966},{"sent":"man in front","ref_id":35970},{"sent":"player in white","ref_id":35975},{"sent":"guy in blue","ref_id":35976},{"sent":"woman","ref_id":36051},{"sent":"baby","ref_id":36052},{"sent":"batter","ref_id":36160},{"sent":"catcher","ref_id":36161},{"sent":"umpire","ref_id":36162},{"sent":"kid","ref_id":36426},{"sent":"baby","ref_id":36427},{"sent":"woman","ref_id":36545},{"sent":"woman","ref_id":36546},{"sent":"woman on left","ref_id":36691},{"sent":"man on right","ref_id":36692},{"sent":"woman in black","ref_id":36693},{"sent":"man in blue shirt","ref_id":36694},{"sent":"woman on right","ref_id":36779},{"sent":"man","ref_id":36780},{"sent":"woman in back","ref_id":36880},{"sent":"woman in white","ref_id":36881},{"sent":"man on right","ref_id":36911},{"sent":"left person","ref_id":36912},{"sent":"man on left","ref_id":36913},{"sent":"left person","ref_id":36928},{"sent":"girl","ref_id":36929},{"sent":"girl","ref_id":36930},{"sent":"girl","ref_id":36931},{"sent":"batter","ref_id":36993},{"sent":"batter","ref_id":36994},{"sent":"right person","ref_id":36999},{"sent":"left guy","ref_id":37000},{"sent":"umpire","ref_id":37032},{"sent":"batter","ref_id":37033},{"sent":"woman","ref_id":37066},{"sent":"man","ref_id":37067},{"sent":"man on right","ref_id":37125},{"sent":"man","ref_id":37126},{"sent":"red shirt","ref_id":37250},{"sent":"girl","ref_id":37251},{"sent":"girl in blue","ref_id":37286},{"sent":"right guy","ref_id":37287},{"sent":"man on left","ref_id":37288},{"sent":"person in background","ref_id":37431},{"sent":"man","ref_id":37432},{"sent":"left hand","ref_id":37472},{"sent":"hand on right","ref_id":37473},{"sent":"man on right","ref_id":37478},{"sent":"woman","ref_id":37479},{"sent":"man in black 
shirt","ref_id":37598},{"sent":"man in white shirt","ref_id":37599},{"sent":"man in black shirt","ref_id":37600},{"sent":"kid","ref_id":37756},{"sent":"girl","ref_id":37757},{"sent":"baby","ref_id":37800},{"sent":"woman","ref_id":37801},{"sent":"woman on left","ref_id":37815},{"sent":"man in white shirt","ref_id":37816},{"sent":"man in white shirt","ref_id":37883},{"sent":"white shirt","ref_id":37884},{"sent":"man on right","ref_id":37974},{"sent":"woman","ref_id":37975},{"sent":"left girl","ref_id":38214},{"sent":"man on right","ref_id":38215},{"sent":"right UNK","ref_id":38216},{"sent":"man in black shirt","ref_id":38227},{"sent":"woman in white","ref_id":38228},{"sent":"left guy","ref_id":38274},{"sent":"man in white","ref_id":38275},{"sent":"person on right","ref_id":38276},{"sent":"man in white shirt","ref_id":38340},{"sent":"man in white shirt","ref_id":38341},{"sent":"man on right","ref_id":38390},{"sent":"left woman","ref_id":38391},{"sent":"right guy","ref_id":38392},{"sent":"right guy","ref_id":38417},{"sent":"guy in white","ref_id":38418},{"sent":"guy in white","ref_id":38419},{"sent":"man on right","ref_id":38446},{"sent":"second from right","ref_id":38447},{"sent":"man in middle","ref_id":38448},{"sent":"second from left","ref_id":38449},{"sent":"man on left","ref_id":38504},{"sent":"man on right","ref_id":38505},{"sent":"woman","ref_id":38506},{"sent":"right girl","ref_id":38544},{"sent":"woman on left","ref_id":38545},{"sent":"chair on left","ref_id":38546},{"sent":"chair on right","ref_id":38547},{"sent":"right person","ref_id":38587},{"sent":"man in middle","ref_id":38588},{"sent":"person in middle","ref_id":38589},{"sent":"man in white shirt","ref_id":38650},{"sent":"woman in black","ref_id":38651},{"sent":"left guy","ref_id":38654},{"sent":"man in red shirt","ref_id":38655},{"sent":"woman in black","ref_id":38656},{"sent":"red shirt","ref_id":38742},{"sent":"man in white shirt","ref_id":38743},{"sent":"woman in black","ref_id":38744},{"sent":"man","ref_id":38815},{"sent":"man","ref_id":38816},{"sent":"person on bike","ref_id":38856},{"sent":"person in red","ref_id":38857},{"sent":"left person","ref_id":38938},{"sent":"woman in front","ref_id":38939},{"sent":"white couch","ref_id":39139},{"sent":"woman in red","ref_id":39140},{"sent":"man in white","ref_id":39141},{"sent":"man in white shirt","ref_id":39142},{"sent":"man on right","ref_id":39180},{"sent":"man on right","ref_id":39181},{"sent":"man in white","ref_id":39182},{"sent":"man in white","ref_id":39183},{"sent":"man on right","ref_id":39298},{"sent":"man on left","ref_id":39299},{"sent":"left person","ref_id":39406},{"sent":"man","ref_id":39407},{"sent":"black shirt","ref_id":39550},{"sent":"laptop on right","ref_id":39551},{"sent":"black laptop","ref_id":39552},{"sent":"blue shirt","ref_id":39553},{"sent":"left laptop","ref_id":39554},{"sent":"left hand","ref_id":39555},{"sent":"guy in white","ref_id":39593},{"sent":"guy in red shirt","ref_id":39594},{"sent":"guy in white shirt","ref_id":39595},{"sent":"white shirt","ref_id":39596},{"sent":"kid in red","ref_id":39597},{"sent":"guy in middle","ref_id":39598},{"sent":"kid in blue","ref_id":39599},{"sent":"girl on right","ref_id":39600},{"sent":"man on left","ref_id":39635},{"sent":"man","ref_id":39636},{"sent":"person on right","ref_id":39644},{"sent":"woman","ref_id":39645},{"sent":"white car on right","ref_id":39646},{"sent":"baby","ref_id":39755},{"sent":"baby","ref_id":39756},{"sent":"man in black","ref_id":39839},{"sent":"kid in 
white","ref_id":39875},{"sent":"woman","ref_id":39876},{"sent":"baby","ref_id":39877},{"sent":"right person","ref_id":39909},{"sent":"person on right","ref_id":39910},{"sent":"woman in front","ref_id":39929},{"sent":"woman on right","ref_id":39930},{"sent":"woman in pink","ref_id":39931},{"sent":"girl in back","ref_id":40011},{"sent":"boy","ref_id":40012},{"sent":"person in black shirt on right","ref_id":40013},{"sent":"player on right","ref_id":40122},{"sent":"catcher","ref_id":40123},{"sent":"man in white","ref_id":40174},{"sent":"woman on left","ref_id":40175},{"sent":"woman in black","ref_id":40313},{"sent":"person on right","ref_id":40314},{"sent":"man on right","ref_id":40315},{"sent":"woman on left","ref_id":40316},{"sent":"woman in black","ref_id":40348},{"sent":"woman on left","ref_id":40349},{"sent":"man in white shirt","ref_id":40374},{"sent":"white hat","ref_id":40375},{"sent":"woman on left","ref_id":40376},{"sent":"player in white","ref_id":40390},{"sent":"man in white","ref_id":40391},{"sent":"right hot dog","ref_id":40496},{"sent":"woman in back","ref_id":40497},{"sent":"right pizza","ref_id":40498},{"sent":"girl in pink","ref_id":40499},{"sent":"man in white shirt","ref_id":40567},{"sent":"left person","ref_id":40568},{"sent":"girl on left","ref_id":40601},{"sent":"person in front","ref_id":40602},{"sent":"woman on right","ref_id":40603},{"sent":"umpire","ref_id":40633},{"sent":"batter","ref_id":40634},{"sent":"batter","ref_id":40635},{"sent":"car on right","ref_id":40695},{"sent":"right guy","ref_id":40696},{"sent":"kid in blue","ref_id":40725},{"sent":"catcher","ref_id":40726},{"sent":"man in middle","ref_id":40808},{"sent":"woman in black","ref_id":40809},{"sent":"woman on left","ref_id":40856},{"sent":"woman on right","ref_id":40857},{"sent":"woman","ref_id":40864},{"sent":"woman in white","ref_id":40865},{"sent":"woman","ref_id":40866},{"sent":"black area above the UNK","ref_id":40878},{"sent":"the man in the middle","ref_id":40879},{"sent":"man on right","ref_id":40880},{"sent":"white shirt","ref_id":40881},{"sent":"man in white","ref_id":41067},{"sent":"man in front","ref_id":41068},{"sent":"girl","ref_id":41212},{"sent":"banana","ref_id":41213},{"sent":"woman","ref_id":41214},{"sent":"woman","ref_id":41215},{"sent":"man on left","ref_id":41296},{"sent":"man","ref_id":41297},{"sent":"man","ref_id":41457},{"sent":"woman","ref_id":41458},{"sent":"woman","ref_id":41478},{"sent":"girl","ref_id":41479},{"sent":"man in red hat","ref_id":41679},{"sent":"man in white shirt","ref_id":41680},{"sent":"woman in blue shirt","ref_id":41681},{"sent":"left elephant","ref_id":41705},{"sent":"elephant on right","ref_id":41706},{"sent":"elephant in back","ref_id":41707},{"sent":"baby","ref_id":41708},{"sent":"baby","ref_id":41709},{"sent":"right person","ref_id":41812},{"sent":"woman in white","ref_id":41889},{"sent":"woman in white","ref_id":41890},{"sent":"person on left","ref_id":41899},{"sent":"man in white","ref_id":41900},{"sent":"woman on right","ref_id":42074},{"sent":"bottle on left","ref_id":42075},{"sent":"right bottle","ref_id":42076},{"sent":"bottle on left","ref_id":42077},{"sent":"man in blue","ref_id":42078},{"sent":"man on left","ref_id":42079},{"sent":"kid in red","ref_id":42080},{"sent":"person in white","ref_id":42189},{"sent":"kid","ref_id":42190},{"sent":"man in white","ref_id":42208},{"sent":"man in white shirt","ref_id":42209},{"sent":"guy in white shirt","ref_id":42257},{"sent":"guy in white shirt","ref_id":42258},{"sent":"guy on 
right","ref_id":42369},{"sent":"man in black shirt","ref_id":42370},{"sent":"woman","ref_id":42635},{"sent":"man","ref_id":42636},{"sent":"right guy","ref_id":42678},{"sent":"woman","ref_id":42679},{"sent":"right woman","ref_id":42896},{"sent":"person in black under umbrella","ref_id":42897},{"sent":"man on right","ref_id":43003},{"sent":"bottom left table","ref_id":43004},{"sent":"man in white","ref_id":43005},{"sent":"girl","ref_id":43088},{"sent":"woman","ref_id":43089},{"sent":"left guy","ref_id":43150},{"sent":"left person","ref_id":43151},{"sent":"right guy","ref_id":43152},{"sent":"person on right","ref_id":43153},{"sent":"right guy","ref_id":43162},{"sent":"bike on right","ref_id":43175},{"sent":"right blue","ref_id":43176},{"sent":"man on bike","ref_id":43177},{"sent":"front guy","ref_id":43178},{"sent":"bike on left","ref_id":43179},{"sent":"second bike from left","ref_id":43180},{"sent":"bike on right","ref_id":43181},{"sent":"baby","ref_id":43261},{"sent":"left person","ref_id":43262},{"sent":"baby","ref_id":43263},{"sent":"baby","ref_id":43264},{"sent":"table in front","ref_id":43288},{"sent":"woman in middle","ref_id":43289},{"sent":"girl on right","ref_id":43290},{"sent":"woman on left","ref_id":43291},{"sent":"middle chair","ref_id":43292},{"sent":"girl on right","ref_id":43298},{"sent":"woman","ref_id":43299},{"sent":"man in black","ref_id":43300},{"sent":"person in black","ref_id":43311},{"sent":"person in black","ref_id":43312},{"sent":"left girl","ref_id":43313},{"sent":"right girl","ref_id":43314},{"sent":"top left corner","ref_id":43315},{"sent":"person on left","ref_id":43316},{"sent":"right glass","ref_id":43317},{"sent":"glass on right","ref_id":43318},{"sent":"person in back","ref_id":43319},{"sent":"right glass","ref_id":43320},{"sent":"blue car","ref_id":43341},{"sent":"red shirt","ref_id":43342},{"sent":"person on right","ref_id":43343},{"sent":"white car","ref_id":43344},{"sent":"woman","ref_id":43379},{"sent":"man","ref_id":43380},{"sent":"man in white","ref_id":43535},{"sent":"person on left","ref_id":43536},{"sent":"woman in black","ref_id":43537},{"sent":"person on left","ref_id":43538},{"sent":"man in front","ref_id":43539},{"sent":"left bear","ref_id":43647},{"sent":"woman","ref_id":43648},{"sent":"man in blue","ref_id":43944},{"sent":"man in middle","ref_id":43945},{"sent":"person on right","ref_id":43948},{"sent":"woman","ref_id":43949},{"sent":"red shirt","ref_id":43996},{"sent":"guy in blue","ref_id":44027},{"sent":"tennis player","ref_id":44028},{"sent":"right guy","ref_id":44050},{"sent":"man in black","ref_id":44051},{"sent":"kid in red","ref_id":44052},{"sent":"kid in red","ref_id":44357},{"sent":"man in white shirt","ref_id":44358},{"sent":"person in background","ref_id":44417},{"sent":"woman","ref_id":44418},{"sent":"woman","ref_id":44448},{"sent":"man","ref_id":44449},{"sent":"man on left","ref_id":44517},{"sent":"girl","ref_id":44518},{"sent":"woman","ref_id":44519},{"sent":"woman in purple","ref_id":44553},{"sent":"right person","ref_id":44554},{"sent":"red shirt","ref_id":44582},{"sent":"girl in blue shirt","ref_id":44583},{"sent":"guy in black shirt","ref_id":44584},{"sent":"woman in white","ref_id":44628},{"sent":"man on left","ref_id":44629},{"sent":"woman in white","ref_id":44630},{"sent":"girl in middle","ref_id":44633},{"sent":"man on left","ref_id":44634},{"sent":"woman in back","ref_id":44635},{"sent":"left person","ref_id":44642},{"sent":"woman","ref_id":44643},{"sent":"guy in red","ref_id":44644},{"sent":"person on 
left","ref_id":44645},{"sent":"guy in white","ref_id":44699},{"sent":"right hand","ref_id":44700},{"sent":"man on right","ref_id":44714},{"sent":"woman on left","ref_id":44715},{"sent":"man in black","ref_id":44732},{"sent":"man in red","ref_id":44733},{"sent":"the woman","ref_id":44740},{"sent":"hand","ref_id":44741},{"sent":"man","ref_id":44766},{"sent":"batter","ref_id":45019},{"sent":"catcher","ref_id":45020},{"sent":"red shirt","ref_id":45021},{"sent":"girl in white shirt","ref_id":45046},{"sent":"person on right","ref_id":45047},{"sent":"woman in white","ref_id":45048},{"sent":"left girl","ref_id":45049},{"sent":"guy on left","ref_id":45050},{"sent":"girl in white","ref_id":45051},{"sent":"girl in white","ref_id":45052},{"sent":"catcher","ref_id":45300},{"sent":"batter","ref_id":45301},{"sent":"guy on right","ref_id":45340},{"sent":"woman on left","ref_id":45341},{"sent":"woman in black dress","ref_id":45342},{"sent":"person on right","ref_id":45358},{"sent":"person in middle","ref_id":45359},{"sent":"man on right","ref_id":45367},{"sent":"woman in black","ref_id":45368},{"sent":"woman on right","ref_id":45369},{"sent":"person on left","ref_id":45407},{"sent":"woman in white","ref_id":45434},{"sent":"man in red","ref_id":45435},{"sent":"woman in black","ref_id":45436},{"sent":"guy on bike","ref_id":45600},{"sent":"man","ref_id":45601},{"sent":"man on left","ref_id":45675},{"sent":"man on left","ref_id":45676},{"sent":"man in black","ref_id":45677},{"sent":"woman on left","ref_id":45837},{"sent":"man on left","ref_id":45838},{"sent":"girl on right","ref_id":45839},{"sent":"white shirt","ref_id":45840},{"sent":"woman in red","ref_id":45841},{"sent":"right guy","ref_id":45863},{"sent":"left guy","ref_id":45864},{"sent":"guy in black","ref_id":45865},{"sent":"woman","ref_id":45966},{"sent":"woman","ref_id":45967},{"sent":"catcher","ref_id":46080},{"sent":"batter","ref_id":46081},{"sent":"umpire","ref_id":46165},{"sent":"catcher","ref_id":46166},{"sent":"woman in front","ref_id":46208},{"sent":"woman in front","ref_id":46209},{"sent":"person under umbrella","ref_id":46210},{"sent":"person on left","ref_id":46211},{"sent":"man in white","ref_id":46285},{"sent":"man in red hat","ref_id":46286},{"sent":"man in blue shirt","ref_id":46321},{"sent":"man on right","ref_id":46322},{"sent":"girl in white shirt on right","ref_id":46350},{"sent":"woman in white on right","ref_id":46351},{"sent":"woman in front with hat","ref_id":46352},{"sent":"woman in red","ref_id":46353},{"sent":"woman","ref_id":46393},{"sent":"right arm","ref_id":46394},{"sent":"top right corner","ref_id":46403},{"sent":"person in back","ref_id":46404},{"sent":"kid","ref_id":46405},{"sent":"person in background","ref_id":46451},{"sent":"person on right","ref_id":46452},{"sent":"woman","ref_id":46453},{"sent":"girl on right","ref_id":46555},{"sent":"arm on left","ref_id":46556},{"sent":"man in black shirt","ref_id":46581},{"sent":"man","ref_id":46582},{"sent":"person on left","ref_id":46672},{"sent":"man in black","ref_id":46673},{"sent":"man in middle with glasses","ref_id":46678},{"sent":"woman in white","ref_id":46679},{"sent":"bottom right corner","ref_id":46680},{"sent":"woman on right with black hair","ref_id":46681},{"sent":"woman in front with black hair","ref_id":46682},{"sent":"man in front with glasses","ref_id":46683},{"sent":"man on left","ref_id":46823},{"sent":"woman","ref_id":46824},{"sent":"girl in blue shirt","ref_id":46834},{"sent":"girl in black","ref_id":46835},{"sent":"person on 
right","ref_id":46836},{"sent":"girl in white shirt","ref_id":46837},{"sent":"groom","ref_id":46838},{"sent":"bride","ref_id":46839},{"sent":"right guy","ref_id":46880},{"sent":"man in white","ref_id":46881},{"sent":"woman on right","ref_id":46938},{"sent":"man on left","ref_id":46939},{"sent":"bottom right bowl","ref_id":46940},{"sent":"glass on left","ref_id":46941},{"sent":"left kid","ref_id":46949},{"sent":"man on right","ref_id":46950},{"sent":"woman in black","ref_id":46951},{"sent":"right person","ref_id":47014},{"sent":"left person","ref_id":47015},{"sent":"man on right","ref_id":47077},{"sent":"man on right","ref_id":47092},{"sent":"man in white shirt","ref_id":47093},{"sent":"man in black","ref_id":47094},{"sent":"guy in white shirt","ref_id":47164},{"sent":"man in white","ref_id":47165},{"sent":"woman in black","ref_id":47319},{"sent":"man in white","ref_id":47391},{"sent":"bottom right corner","ref_id":47392},{"sent":"bottom left person","ref_id":47393},{"sent":"man in black","ref_id":47394},{"sent":"man in middle","ref_id":47441},{"sent":"man in black","ref_id":47442},{"sent":"right guy","ref_id":47443},{"sent":"right guy","ref_id":47446},{"sent":"guy in black","ref_id":47447},{"sent":"person in front","ref_id":47519},{"sent":"woman in front","ref_id":47520},{"sent":"person on right","ref_id":47521},{"sent":"person on left","ref_id":47522},{"sent":"white car","ref_id":47566},{"sent":"woman","ref_id":47567},{"sent":"white shirt","ref_id":47568},{"sent":"right bus","ref_id":47569},{"sent":"woman in black","ref_id":47682},{"sent":"man in suit on right","ref_id":47683},{"sent":"woman in black","ref_id":47684},{"sent":"white shirt left","ref_id":47685},{"sent":"woman in black on left","ref_id":47686},{"sent":"man in black suit","ref_id":47687},{"sent":"man on right","ref_id":47757},{"sent":"woman","ref_id":47758},{"sent":"person on right","ref_id":47777},{"sent":"person on left","ref_id":47778},{"sent":"woman in white","ref_id":47779},{"sent":"woman in red","ref_id":47780},{"sent":"right player","ref_id":47885},{"sent":"left girl","ref_id":47886},{"sent":"right girl","ref_id":47887},{"sent":"left player","ref_id":47888},{"sent":"person in middle","ref_id":47999},{"sent":"person in front","ref_id":48000},{"sent":"right person","ref_id":48001},{"sent":"person in background on left","ref_id":48006},{"sent":"batter","ref_id":48007},{"sent":"man in black","ref_id":48132},{"sent":"woman in blue","ref_id":48133},{"sent":"pizza on right","ref_id":48134},{"sent":"pizza","ref_id":48135},{"sent":"girl","ref_id":48147},{"sent":"girl on right","ref_id":48148},{"sent":"catcher","ref_id":48151},{"sent":"umpire","ref_id":48152},{"sent":"batter","ref_id":48153},{"sent":"man in black shirt","ref_id":48201},{"sent":"man","ref_id":48202},{"sent":"person in front","ref_id":48304},{"sent":"person in middle","ref_id":48305},{"sent":"player in white","ref_id":48322},{"sent":"player","ref_id":48323},{"sent":"man","ref_id":48363},{"sent":"man in black","ref_id":48364},{"sent":"woman","ref_id":48365},{"sent":"man on left","ref_id":48374},{"sent":"right elephant","ref_id":48375},{"sent":"right guy","ref_id":48473},{"sent":"man in middle","ref_id":48474},{"sent":"black umbrella","ref_id":48475},{"sent":"right horse","ref_id":48476},{"sent":"man on left","ref_id":48477},{"sent":"man on left","ref_id":48478},{"sent":"batter","ref_id":48571},{"sent":"catcher","ref_id":48572},{"sent":"girl in blue","ref_id":48610},{"sent":"girl in white","ref_id":48611},{"sent":"man in white shirt","ref_id":48705},{"sent":"woman 
on right","ref_id":48706},{"sent":"woman in black","ref_id":48707},{"sent":"left player","ref_id":48745},{"sent":"right player","ref_id":48746},{"sent":"man","ref_id":48790},{"sent":"man on right","ref_id":48791},{"sent":"woman","ref_id":48983},{"sent":"man on right","ref_id":48984},{"sent":"man in black","ref_id":49014},{"sent":"guy in background","ref_id":49015},{"sent":"batter","ref_id":49016},{"sent":"player in black","ref_id":49199},{"sent":"player in red","ref_id":49200},{"sent":"player in red","ref_id":49201},{"sent":"man","ref_id":49312},{"sent":"man","ref_id":49313},{"sent":"person on left","ref_id":49315},{"sent":"person in middle","ref_id":49316},{"sent":"head on left","ref_id":49373},{"sent":"person on right","ref_id":49374},{"sent":"man in white","ref_id":49375},{"sent":"woman","ref_id":49457},{"sent":"man","ref_id":49458},{"sent":"woman","ref_id":49538},{"sent":"woman","ref_id":49539},{"sent":"right leg","ref_id":49600},{"sent":"left leg","ref_id":49601},{"sent":"person on right","ref_id":49606},{"sent":"batter","ref_id":49607},{"sent":"person on right","ref_id":49620},{"sent":"person on left","ref_id":49621},{"sent":"man","ref_id":49622},{"sent":"man in black shirt","ref_id":49638},{"sent":"man in blue","ref_id":49639},{"sent":"woman on right","ref_id":49747},{"sent":"woman","ref_id":49748},{"sent":"woman in white","ref_id":49833},{"sent":"woman in black","ref_id":49834},{"sent":"guy in black shirt","ref_id":49932},{"sent":"white shirt","ref_id":49933},{"sent":"man in black","ref_id":47},{"sent":"person on right","ref_id":109},{"sent":"woman in red","ref_id":110},{"sent":"car behind bike","ref_id":111},{"sent":"car on left","ref_id":112},{"sent":"man in blue","ref_id":382},{"sent":"man in white","ref_id":383},{"sent":"left person","ref_id":519},{"sent":"man on right","ref_id":520}]}
\ No newline at end of file
diff --git a/refer/test/sample_expressions_testB.json b/refer/test/sample_expressions_testB.json
new file mode 100644
index 0000000000000000000000000000000000000000..eab973c7cdd1390cfad4fb9dc577104cdd3df90d
--- /dev/null
+++ b/refer/test/sample_expressions_testB.json
@@ -0,0 +1 @@
+{"predictions":[{"sent":"car on left","ref_id":25},{"sent":"car on left","ref_id":26},{"sent":"top sandwich","ref_id":27},{"sent":"top left donut","ref_id":28},{"sent":"zebra on left","ref_id":45},{"sent":"right zebra","ref_id":46},{"sent":"chair in front of man","ref_id":164},{"sent":"bottom right corner","ref_id":165},{"sent":"left chair","ref_id":166},{"sent":"top right corner","ref_id":232},{"sent":"pizza in front","ref_id":233},{"sent":"glass in back","ref_id":234},{"sent":"left glass","ref_id":235},{"sent":"yellow fruit on left","ref_id":259},{"sent":"apple in front","ref_id":260},{"sent":"yellow apple","ref_id":261},{"sent":"orange in the middle","ref_id":262},{"sent":"bottom right orange","ref_id":285},{"sent":"bottom left apple","ref_id":286},{"sent":"bottom right green apple","ref_id":287},{"sent":"second row from bottom right","ref_id":288},{"sent":"white bear","ref_id":299},{"sent":"brown bear","ref_id":300},{"sent":"red vase","ref_id":326},{"sent":"red vase","ref_id":327},{"sent":"vase","ref_id":328},{"sent":"glass on left","ref_id":360},{"sent":"glass of beer","ref_id":361},{"sent":"bottle on right","ref_id":362},{"sent":"bottle on left","ref_id":363},{"sent":"bottle of wine bottle on left","ref_id":364},{"sent":"right horse","ref_id":435},{"sent":"left horse","ref_id":436},{"sent":"train on right","ref_id":474},{"sent":"train on left","ref_id":475},{"sent":"right elephant","ref_id":545},{"sent":"boat on right","ref_id":605},{"sent":"white car","ref_id":629},{"sent":"white car","ref_id":630},{"sent":"left bed","ref_id":668},{"sent":"bed","ref_id":669},{"sent":"right bike","ref_id":677},{"sent":"left bike","ref_id":678},{"sent":"left table","ref_id":721},{"sent":"table","ref_id":722},{"sent":"traffic light","ref_id":837},{"sent":"traffic light","ref_id":838},{"sent":"front bike","ref_id":855},{"sent":"front bike","ref_id":856},{"sent":"blue tie","ref_id":923},{"sent":"left tie","ref_id":924},{"sent":"right tie","ref_id":925},{"sent":"red tie","ref_id":926},{"sent":"monitor on right","ref_id":940},{"sent":"monitor on right","ref_id":941},{"sent":"pizza in front","ref_id":1155},{"sent":"pizza slice","ref_id":1156},{"sent":"bottom left bananas","ref_id":1218},{"sent":"top left bananas","ref_id":1219},{"sent":"pizza in front","ref_id":1227},{"sent":"pizza in front","ref_id":1228},{"sent":"baby elephant","ref_id":1256},{"sent":"big elephant","ref_id":1257},{"sent":"right orange","ref_id":1273},{"sent":"top orange","ref_id":1274},{"sent":"horse on left","ref_id":1283},{"sent":"left fridge","ref_id":1339},{"sent":"fridge in front of the fridge","ref_id":1340},{"sent":"right cow","ref_id":1368},{"sent":"white truck","ref_id":1644},{"sent":"white car","ref_id":1645},{"sent":"broccoli in front","ref_id":1776},{"sent":"broccoli on right","ref_id":1777},{"sent":"second row from right","ref_id":1865},{"sent":"top middle sandwich","ref_id":1866},{"sent":"right most sandwich","ref_id":1867},{"sent":"left sandwich","ref_id":1868},{"sent":"left most sandwich","ref_id":1869},{"sent":"middle row second from right","ref_id":1870},{"sent":"second from right","ref_id":1871},{"sent":"elephant on right","ref_id":2033},{"sent":"elephant","ref_id":2034},{"sent":"second from left","ref_id":2103},{"sent":"right most yellow","ref_id":2104},{"sent":"second row from right","ref_id":2105},{"sent":"top right donut","ref_id":2122},{"sent":"bottom left donut","ref_id":2123},{"sent":"bottom left donut","ref_id":2124},{"sent":"middle donut","ref_id":2125},{"sent":"right donut","ref_id":2126},{"sent":"top right 
dessert","ref_id":2370},{"sent":"middle dessert","ref_id":2371},{"sent":"bowl","ref_id":2392},{"sent":"left bowl","ref_id":2393},{"sent":"bus on right","ref_id":2467},{"sent":"bus in front","ref_id":2468},{"sent":"left monitor","ref_id":2540},{"sent":"right monitor","ref_id":2541},{"sent":"blurry food in back","ref_id":2576},{"sent":"bottom right corner","ref_id":2577},{"sent":"glass in front of the woman","ref_id":2578},{"sent":"glass on left","ref_id":2579},{"sent":"plant on right","ref_id":2642},{"sent":"top right corner","ref_id":2643},{"sent":"green plant","ref_id":2644},{"sent":"bike on right","ref_id":2692},{"sent":"bike on the right","ref_id":2693},{"sent":"bike on left","ref_id":2694},{"sent":"bottom right red","ref_id":2738},{"sent":"bottom left UNK","ref_id":2739},{"sent":"left cat","ref_id":2857},{"sent":"cat on right","ref_id":2858},{"sent":"left elephant","ref_id":2937},{"sent":"right elephant","ref_id":2938},{"sent":"elephant on right","ref_id":2939},{"sent":"left person","ref_id":2944},{"sent":"person on left","ref_id":2945},{"sent":"top left hot dog","ref_id":2946},{"sent":"sandwich on left","ref_id":2947},{"sent":"front sandwich","ref_id":2948},{"sent":"top right sandwich","ref_id":2949},{"sent":"top sandwich","ref_id":2950},{"sent":"right","ref_id":2960},{"sent":"white and white","ref_id":2961},{"sent":"right train","ref_id":2962},{"sent":"right train","ref_id":2963},{"sent":"plant in the middle","ref_id":3028},{"sent":"right plant","ref_id":3029},{"sent":"left umbrella","ref_id":3125},{"sent":"umbrella","ref_id":3126},{"sent":"second from left","ref_id":3224},{"sent":"right box","ref_id":3225},{"sent":"left UNK","ref_id":3226},{"sent":"left horse","ref_id":3303},{"sent":"horse in front","ref_id":3304},{"sent":"bird on right","ref_id":3403},{"sent":"bird on left","ref_id":3404},{"sent":"table","ref_id":3656},{"sent":"left UNK","ref_id":3657},{"sent":"chair on right","ref_id":3844},{"sent":"top right bunk","ref_id":3845},{"sent":"left bear","ref_id":3875},{"sent":"right bear","ref_id":3876},{"sent":"top suitcase","ref_id":3910},{"sent":"right suitcase","ref_id":3911},{"sent":"bottom right suitcase","ref_id":3912},{"sent":"couch on left","ref_id":3919},{"sent":"right couch","ref_id":3920},{"sent":"left wine bottle","ref_id":3931},{"sent":"right bottle","ref_id":3932},{"sent":"top layer","ref_id":3941},{"sent":"front","ref_id":3942},{"sent":"bear on left","ref_id":3950},{"sent":"bear on right","ref_id":3951},{"sent":"bear on right","ref_id":3952},{"sent":"big bear","ref_id":3954},{"sent":"bottom left bear","ref_id":3955},{"sent":"bottom left suitcase","ref_id":4004},{"sent":"top right corner","ref_id":4005},{"sent":"left sandwich","ref_id":4021},{"sent":"sandwich on left","ref_id":4022},{"sent":"right bottle","ref_id":4072},{"sent":"second from left","ref_id":4073},{"sent":"UNK bottle","ref_id":4074},{"sent":"UNK","ref_id":4075},{"sent":"zebra on left","ref_id":4329},{"sent":"zebra in front","ref_id":4330},{"sent":"zebra on left","ref_id":4500},{"sent":"right zebra","ref_id":4501},{"sent":"glass on left","ref_id":4724},{"sent":"bus on right","ref_id":4806},{"sent":"bottom black","ref_id":4822},{"sent":"black suitcase on right","ref_id":4823},{"sent":"left chair","ref_id":4911},{"sent":"boat on left","ref_id":4912},{"sent":"top right corner","ref_id":4915},{"sent":"top left donut","ref_id":4916},{"sent":"top middle donut","ref_id":4917},{"sent":"bottom right donut","ref_id":4918},{"sent":"middle donut","ref_id":4919},{"sent":"top left apple","ref_id":4925},{"sent":"orange on 
right","ref_id":4926},{"sent":"orange in middle","ref_id":4927},{"sent":"middle apple","ref_id":4928},{"sent":"bottom right carrot","ref_id":4981},{"sent":"orange carrot","ref_id":4982},{"sent":"bottom right corner","ref_id":4987},{"sent":"car bottom left","ref_id":4988},{"sent":"top left corner","ref_id":5000},{"sent":"the cat","ref_id":5001},{"sent":"bottom right sheep","ref_id":5037},{"sent":"left sheep","ref_id":5038},{"sent":"sheep in front","ref_id":5039},{"sent":"right giraffe","ref_id":5040},{"sent":"right giraffe","ref_id":5041},{"sent":"right remote","ref_id":5072},{"sent":"left remote","ref_id":5073},{"sent":"white bowl of food","ref_id":5074},{"sent":"hot dog","ref_id":5075},{"sent":"top right slice","ref_id":5116},{"sent":"left sandwich","ref_id":5117},{"sent":"bottom left food","ref_id":5178},{"sent":"pizza on right","ref_id":5179},{"sent":"bowl of food on left","ref_id":5180},{"sent":"left bowl","ref_id":5181},{"sent":"left monitor","ref_id":5242},{"sent":"right monitor","ref_id":5243},{"sent":"right cow","ref_id":5298},{"sent":"cow on left","ref_id":5299},{"sent":"left horse","ref_id":5327},{"sent":"horse in front","ref_id":5328},{"sent":"horse on right","ref_id":5329},{"sent":"elephant in middle","ref_id":5340},{"sent":"left elephant","ref_id":5341},{"sent":"white car","ref_id":5491},{"sent":"white car","ref_id":5492},{"sent":"yellow car","ref_id":5493},{"sent":"top middle brown bear","ref_id":5521},{"sent":"bear on right","ref_id":5522},{"sent":"right bear","ref_id":5523},{"sent":"top right bear","ref_id":5524},{"sent":"top right bear","ref_id":5525},{"sent":"bear on left","ref_id":5526},{"sent":"bear in middle","ref_id":5527},{"sent":"toilet on left","ref_id":5645},{"sent":"chair in front","ref_id":5646},{"sent":"right sheep","ref_id":5669},{"sent":"left sheep","ref_id":5670},{"sent":"left chair","ref_id":5694},{"sent":"right bed","ref_id":5695},{"sent":"right train","ref_id":5797},{"sent":"right train","ref_id":5798},{"sent":"right slice","ref_id":5809},{"sent":"top left donut","ref_id":5810},{"sent":"white car","ref_id":5829},{"sent":"right white bus","ref_id":5830},{"sent":"white truck","ref_id":5831},{"sent":"donut on right","ref_id":5967},{"sent":"bottom right donut","ref_id":5968},{"sent":"right donut","ref_id":5969},{"sent":"donut on the right","ref_id":5970},{"sent":"left umbrella","ref_id":6053},{"sent":"middle banana","ref_id":6054},{"sent":"left bear","ref_id":6055},{"sent":"left UNK","ref_id":6056},{"sent":"left sheep","ref_id":6258},{"sent":"sheep in middle","ref_id":6259},{"sent":"right sheep","ref_id":6260},{"sent":"middle bowl","ref_id":6278},{"sent":"top left food","ref_id":6279},{"sent":"top left tray","ref_id":6280},{"sent":"top right bread","ref_id":6281},{"sent":"bottom left","ref_id":6282},{"sent":"right bread","ref_id":6283},{"sent":"bottom left bread","ref_id":6284},{"sent":"bottom left bowl","ref_id":6312},{"sent":"green apple","ref_id":6313},{"sent":"right clock","ref_id":6843},{"sent":"left clock","ref_id":6844},{"sent":"left bed","ref_id":6927},{"sent":"right bed","ref_id":6928},{"sent":"bottom left corner","ref_id":6929},{"sent":"bed","ref_id":6974},{"sent":"bed on right","ref_id":6975},{"sent":"top right broccoli","ref_id":7143},{"sent":"broccoli in front","ref_id":7144},{"sent":"broccoli on right","ref_id":7159},{"sent":"broccoli on left","ref_id":7160},{"sent":"UNK","ref_id":7183},{"sent":"UNK","ref_id":7184},{"sent":"left cow","ref_id":7252},{"sent":"right cow","ref_id":7253},{"sent":"bottom right corner","ref_id":7268},{"sent":"front 
bike","ref_id":7269},{"sent":"left suitcase","ref_id":7316},{"sent":"right suitcase","ref_id":7317},{"sent":"black suitcase","ref_id":7318},{"sent":"second suitcase from left","ref_id":7319},{"sent":"second bus from right","ref_id":7636},{"sent":"right bus","ref_id":7637},{"sent":"right duck","ref_id":7645},{"sent":"left duck","ref_id":7646},{"sent":"left duck","ref_id":7647},{"sent":"umbrella on left","ref_id":7801},{"sent":"right umbrella","ref_id":7802},{"sent":"broccoli on left","ref_id":7812},{"sent":"broccoli on the right","ref_id":7813},{"sent":"top bear","ref_id":7838},{"sent":"left bear","ref_id":7839},{"sent":"right bear","ref_id":7840},{"sent":"right horse","ref_id":7895},{"sent":"left dog","ref_id":7896},{"sent":"right slice","ref_id":7916},{"sent":"pizza slice","ref_id":7917},{"sent":"elephant on left","ref_id":8085},{"sent":"bear on left","ref_id":8128},{"sent":"left couch","ref_id":8210},{"sent":"right couch","ref_id":8211},{"sent":"right monitor","ref_id":8352},{"sent":"top left monitor","ref_id":8353},{"sent":"top left banana","ref_id":8380},{"sent":"banana in middle","ref_id":8381},{"sent":"banana on left","ref_id":8382},{"sent":"bowl of soup","ref_id":8649},{"sent":"bowl of food on left","ref_id":8650},{"sent":"white car","ref_id":8681},{"sent":"top left corner","ref_id":8729},{"sent":"glass","ref_id":8781},{"sent":"drink","ref_id":8782},{"sent":"left cow","ref_id":8788},{"sent":"left sheep","ref_id":8789},{"sent":"right sheep","ref_id":8790},{"sent":"top left sheep","ref_id":8806},{"sent":"sheep","ref_id":8807},{"sent":"top left donut","ref_id":8855},{"sent":"top left donut","ref_id":8856},{"sent":"donut on left","ref_id":8857},{"sent":"top right donut","ref_id":8858},{"sent":"front bike","ref_id":8909},{"sent":"left bike","ref_id":8910},{"sent":"right duck","ref_id":9214},{"sent":"left duck","ref_id":9215},{"sent":"top duck","ref_id":9216},{"sent":"right bus","ref_id":9233},{"sent":"bus in front","ref_id":9234},{"sent":"donut in front","ref_id":9295},{"sent":"donut on right","ref_id":9296},{"sent":"bottom right donut","ref_id":9305},{"sent":"bottom donut","ref_id":9306},{"sent":"donut in front","ref_id":9307},{"sent":"donut in front","ref_id":9308},{"sent":"left umbrella","ref_id":9423},{"sent":"left umbrella","ref_id":9424},{"sent":"top right umbrella","ref_id":9425},{"sent":"right umbrella","ref_id":9426},{"sent":"left plane","ref_id":9446},{"sent":"plane","ref_id":9447},{"sent":"right giraffe","ref_id":9560},{"sent":"left giraffe","ref_id":9561},{"sent":"hand","ref_id":9574},{"sent":"hand","ref_id":9575},{"sent":"bottom left apple","ref_id":9576},{"sent":"right giraffe","ref_id":9598},{"sent":"giraffe on left","ref_id":9599},{"sent":"black suitcase","ref_id":9628},{"sent":"elephant in front","ref_id":9629},{"sent":"elephant in front","ref_id":9630},{"sent":"elephant on right","ref_id":9631},{"sent":"right couch","ref_id":9707},{"sent":"couch","ref_id":9708},{"sent":"right horse","ref_id":9836},{"sent":"horse on left","ref_id":9837},{"sent":"horse on left","ref_id":9919},{"sent":"white horse","ref_id":9920},{"sent":"horse in front","ref_id":9921},{"sent":"bear on left","ref_id":10035},{"sent":"bear on left","ref_id":10036},{"sent":"right sandwich","ref_id":10110},{"sent":"left hot dog","ref_id":10111},{"sent":"elephant on right","ref_id":10239},{"sent":"elephant on right","ref_id":10240},{"sent":"elephant on left","ref_id":10241},{"sent":"left bike","ref_id":10380},{"sent":"motorcycle","ref_id":10381},{"sent":"white car","ref_id":10382},{"sent":"white 
car","ref_id":10383},{"sent":"right bird","ref_id":10601},{"sent":"left duck","ref_id":10602},{"sent":"red laptop","ref_id":10795},{"sent":"red and white UNK","ref_id":10796},{"sent":"cat on right","ref_id":10847},{"sent":"chair on left","ref_id":10907},{"sent":"chair in front of woman","ref_id":10908},{"sent":"orange on left","ref_id":11114},{"sent":"orange on top right","ref_id":11115},{"sent":"truck on right","ref_id":11131},{"sent":"truck on left","ref_id":11132},{"sent":"right piece of broccoli","ref_id":11192},{"sent":"right piece of food","ref_id":11193},{"sent":"glass on right","ref_id":11281},{"sent":"glass in front of wine glass","ref_id":11282},{"sent":"left couch","ref_id":11328},{"sent":"right black couch","ref_id":11329},{"sent":"left couch","ref_id":11330},{"sent":"right chair","ref_id":11331},{"sent":"bottom row second from left","ref_id":11982},{"sent":"bottom left donut","ref_id":11983},{"sent":"top row second from left","ref_id":11984},{"sent":"top row second from right","ref_id":11985},{"sent":"top right donut","ref_id":11986},{"sent":"top right donut","ref_id":11987},{"sent":"second row from right","ref_id":11988},{"sent":"top right donut","ref_id":11989},{"sent":"bottom left donut","ref_id":11990},{"sent":"middle row second from right","ref_id":11991},{"sent":"middle row second from right","ref_id":11992},{"sent":"bottom right donut","ref_id":11993},{"sent":"cow on right","ref_id":12041},{"sent":"cow on right","ref_id":12042},{"sent":"left cow","ref_id":12043},{"sent":"right bear","ref_id":12064},{"sent":"left bear","ref_id":12065},{"sent":"right bear","ref_id":12066},{"sent":"left bear","ref_id":12067},{"sent":"blue bike","ref_id":12106},{"sent":"front bike","ref_id":12107},{"sent":"middle row second from right","ref_id":12134},{"sent":"top left donut","ref_id":12135},{"sent":"middle row second from left","ref_id":12136},{"sent":"middle row second from left","ref_id":12137},{"sent":"bottom left donut","ref_id":12138},{"sent":"middle row","ref_id":12139},{"sent":"middle row second from right","ref_id":12140},{"sent":"right zebra","ref_id":12181},{"sent":"zebra on left","ref_id":12182},{"sent":"horse on right","ref_id":12239},{"sent":"horse on left","ref_id":12240},{"sent":"right","ref_id":12299},{"sent":"middle","ref_id":12300},{"sent":"the little girl","ref_id":12394},{"sent":"bottom right corner","ref_id":12395},{"sent":"left giraffe","ref_id":12407},{"sent":"right giraffe","ref_id":12408},{"sent":"left bus","ref_id":12421},{"sent":"right bus","ref_id":12422},{"sent":"right bus","ref_id":12423},{"sent":"left bike","ref_id":12464},{"sent":"bike on right","ref_id":12465},{"sent":"bike in front","ref_id":12466},{"sent":"bench","ref_id":12539},{"sent":"table","ref_id":12540},{"sent":"right boat","ref_id":12681},{"sent":"left boat","ref_id":12682},{"sent":"left cow","ref_id":12792},{"sent":"cow in front","ref_id":12793},{"sent":"right glass","ref_id":12899},{"sent":"left glass","ref_id":12900},{"sent":"second glass from left","ref_id":12901},{"sent":"middle glass","ref_id":12902},{"sent":"right giraffe","ref_id":12983},{"sent":"middle giraffe","ref_id":12984},{"sent":"right bus","ref_id":13072},{"sent":"left bus","ref_id":13073},{"sent":"top right bear","ref_id":13080},{"sent":"top bear","ref_id":13081},{"sent":"right chair","ref_id":13100},{"sent":"bottom right corner","ref_id":13101},{"sent":"cake in front","ref_id":13110},{"sent":"middle donut","ref_id":13111},{"sent":"bottom clock","ref_id":13188},{"sent":"clock on right","ref_id":13189},{"sent":"middle 
bear","ref_id":13298},{"sent":"left bear","ref_id":13299},{"sent":"bottom right dish","ref_id":13338},{"sent":"bottom left dish","ref_id":13339},{"sent":"bottom left bowl","ref_id":13340},{"sent":"bottom right bowl","ref_id":13341},{"sent":"bottom right bowl","ref_id":13342},{"sent":"top right bowl","ref_id":13343},{"sent":"top right container","ref_id":13344},{"sent":"bottom left bowl","ref_id":13345},{"sent":"white bird on right","ref_id":13370},{"sent":"middle duck","ref_id":13371},{"sent":"right bus","ref_id":13450},{"sent":"bus in front","ref_id":13451},{"sent":"motorcycle in front","ref_id":13459},{"sent":"motorcycle on left","ref_id":13460},{"sent":"white car on right","ref_id":13461},{"sent":"cow in front","ref_id":13504},{"sent":"right horse","ref_id":13505},{"sent":"cow on left","ref_id":13506},{"sent":"white car top left","ref_id":13511},{"sent":"blue bus","ref_id":13512},{"sent":"bus in middle","ref_id":13520},{"sent":"right bus","ref_id":13521},{"sent":"giraffe in front","ref_id":13658},{"sent":"giraffe on left","ref_id":13659},{"sent":"left monitor","ref_id":13708},{"sent":"couch on right","ref_id":13768},{"sent":"couch on left","ref_id":13769},{"sent":"left racket","ref_id":13819},{"sent":"right racket","ref_id":13820},{"sent":"hand on left","ref_id":13825},{"sent":"top left corner","ref_id":13826},{"sent":"top donut","ref_id":13827},{"sent":"left bus","ref_id":13830},{"sent":"bus on right","ref_id":13831},{"sent":"red car","ref_id":13832},{"sent":"chair on right","ref_id":13851},{"sent":"chair on left","ref_id":13852},{"sent":"banana on top","ref_id":13871},{"sent":"top banana","ref_id":13872},{"sent":"banana","ref_id":13873},{"sent":"top banana","ref_id":13874},{"sent":"bottom left sandwich","ref_id":14098},{"sent":"left sandwich","ref_id":14099},{"sent":"bananas in front","ref_id":14184},{"sent":"bananas","ref_id":14185},{"sent":"right zebra","ref_id":14283},{"sent":"zebra in the middle","ref_id":14284},{"sent":"top clock","ref_id":14470},{"sent":"bottom clock","ref_id":14471},{"sent":"right horse","ref_id":14509},{"sent":"left cow","ref_id":14510},{"sent":"red book","ref_id":14551},{"sent":"green book","ref_id":14552},{"sent":"zebra on right","ref_id":14652},{"sent":"zebra in front","ref_id":14653},{"sent":"right bird","ref_id":14665},{"sent":"left bird","ref_id":14666},{"sent":"orange","ref_id":14727},{"sent":"orange on right","ref_id":14728},{"sent":"orange on right","ref_id":14729},{"sent":"top right apple","ref_id":14730},{"sent":"left elephant","ref_id":14731},{"sent":"left elephant","ref_id":14732},{"sent":"elephant on right","ref_id":14733},{"sent":"right train","ref_id":14758},{"sent":"red train","ref_id":14759},{"sent":"broccoli","ref_id":14825},{"sent":"bottom left corner","ref_id":14826},{"sent":"left meter","ref_id":14846},{"sent":"right meter","ref_id":14847},{"sent":"middle fridge","ref_id":14848},{"sent":"left fridge","ref_id":14849},{"sent":"right fridge","ref_id":14850},{"sent":"red couch","ref_id":14942},{"sent":"right couch","ref_id":14943},{"sent":"bear on left","ref_id":14944},{"sent":"bear on the right","ref_id":14945},{"sent":"right giraffe","ref_id":14954},{"sent":"right giraffe","ref_id":14955},{"sent":"UNK","ref_id":15013},{"sent":"top cake","ref_id":15014},{"sent":"left sheep","ref_id":15022},{"sent":"white sheep","ref_id":15023},{"sent":"top right","ref_id":15024},{"sent":"top right sheep","ref_id":15025},{"sent":"sheep in front","ref_id":15026},{"sent":"top right broccoli","ref_id":15071},{"sent":"broccoli on 
right","ref_id":15072},{"sent":"broccoli in front","ref_id":15073},{"sent":"bike on right","ref_id":15135},{"sent":"bike on left","ref_id":15136},{"sent":"right bottle","ref_id":15241},{"sent":"right bottle","ref_id":15242},{"sent":"pizza on right","ref_id":15306},{"sent":"pizza on right","ref_id":15307},{"sent":"slice of pizza on left","ref_id":15308},{"sent":"top left slice","ref_id":15309},{"sent":"horse on left","ref_id":15310},{"sent":"horse on right","ref_id":15311},{"sent":"left bike","ref_id":15318},{"sent":"left bike","ref_id":15319},{"sent":"elephant on left","ref_id":15450},{"sent":"right elephant","ref_id":15451},{"sent":"elephant in front","ref_id":15452},{"sent":"elephant on left","ref_id":15469},{"sent":"right elephant","ref_id":15470},{"sent":"blue car","ref_id":15580},{"sent":"left car","ref_id":15581},{"sent":"pizza on right","ref_id":15686},{"sent":"pizza","ref_id":15687},{"sent":"chair on the left","ref_id":15741},{"sent":"left chair","ref_id":15742},{"sent":"bottom right","ref_id":15767},{"sent":"top hot dog","ref_id":15768},{"sent":"sandwich","ref_id":15877},{"sent":"left sandwich","ref_id":15878},{"sent":"left screen","ref_id":15975},{"sent":"left screen","ref_id":15976},{"sent":"right screen","ref_id":15977},{"sent":"bottom bench","ref_id":15984},{"sent":"right couch","ref_id":15985},{"sent":"left suitcase","ref_id":16044},{"sent":"second from left","ref_id":16045},{"sent":"red bus","ref_id":16049},{"sent":"bus on left","ref_id":16050},{"sent":"bus","ref_id":16051},{"sent":"second from right","ref_id":16238},{"sent":"second from left","ref_id":16239},{"sent":"bottom right corner","ref_id":16240},{"sent":"middle monitor","ref_id":16241},{"sent":"cow on right","ref_id":16348},{"sent":"white cow","ref_id":16349},{"sent":"cow on left","ref_id":16350},{"sent":"red bus on right","ref_id":16389},{"sent":"middle bus","ref_id":16390},{"sent":"left bear","ref_id":16394},{"sent":"bear on right","ref_id":16395},{"sent":"bear in middle","ref_id":16396},{"sent":"bottom left zebra","ref_id":16436},{"sent":"zebra in front","ref_id":16437},{"sent":"zebra on right","ref_id":16438},{"sent":"zebra in front","ref_id":16439},{"sent":"right side of cat","ref_id":16442},{"sent":"left one","ref_id":16465},{"sent":"chair on the right","ref_id":16477},{"sent":"right chair","ref_id":16478},{"sent":"chair on left","ref_id":16479},{"sent":"left chair","ref_id":16480},{"sent":"right giraffe","ref_id":16501},{"sent":"left giraffe","ref_id":16502},{"sent":"giraffe on left","ref_id":16503},{"sent":"left toothbrush","ref_id":16528},{"sent":"the UNK","ref_id":16529},{"sent":"screen","ref_id":16552},{"sent":"right book","ref_id":16553},{"sent":"bottom phone","ref_id":16554},{"sent":"right screen","ref_id":16555},{"sent":"right laptop","ref_id":16575},{"sent":"monitor on the right","ref_id":16576},{"sent":"white bus","ref_id":16622},{"sent":"yellow bus","ref_id":16623},{"sent":"right clock","ref_id":16642},{"sent":"clock on left","ref_id":16643},{"sent":"clock on right","ref_id":16644},{"sent":"bear in front","ref_id":16653},{"sent":"bear on right","ref_id":16654},{"sent":"woman","ref_id":16706},{"sent":"right half of person","ref_id":16707},{"sent":"bottom left corner","ref_id":16708},{"sent":"left side of table","ref_id":16709},{"sent":"bear on right","ref_id":16764},{"sent":"bear in middle","ref_id":16765},{"sent":"bear on left","ref_id":16766},{"sent":"right bottom corner","ref_id":16808},{"sent":"left bottom corner","ref_id":16809},{"sent":"right bottle","ref_id":16810},{"sent":"left 
bottle","ref_id":16811},{"sent":"second from right","ref_id":16812},{"sent":"second from right","ref_id":16813},{"sent":"second bottle from left","ref_id":16814},{"sent":"second bottle from left","ref_id":16815},{"sent":"second bottle from left","ref_id":16816},{"sent":"top dog","ref_id":16911},{"sent":"top hot dog","ref_id":16912},{"sent":"plant in front of the tree","ref_id":17021},{"sent":"green flowers","ref_id":17022},{"sent":"horse on right","ref_id":17140},{"sent":"horse on left","ref_id":17141},{"sent":"bottom left white cake","ref_id":17184},{"sent":"middle donut","ref_id":17185},{"sent":"right side of pic","ref_id":17301},{"sent":"bottom left corner","ref_id":17302},{"sent":"giraffe in back","ref_id":17382},{"sent":"right giraffe","ref_id":17383},{"sent":"white boat","ref_id":17427},{"sent":"left guy","ref_id":17428},{"sent":"boat on right","ref_id":17429},{"sent":"white boat","ref_id":17430},{"sent":"bed on left","ref_id":17585},{"sent":"bed","ref_id":17586},{"sent":"middle animal","ref_id":17806},{"sent":"chair in middle","ref_id":17822},{"sent":"chair on left","ref_id":17823},{"sent":"left bus","ref_id":17830},{"sent":"right bus","ref_id":17831},{"sent":"right bike","ref_id":17835},{"sent":"left bike","ref_id":17836},{"sent":"second bike from left","ref_id":17837},{"sent":"blue bike","ref_id":17838},{"sent":"red thing","ref_id":17894},{"sent":"red suitcase","ref_id":17895},{"sent":"cup on right","ref_id":18051},{"sent":"train on left","ref_id":18071},{"sent":"right train","ref_id":18072},{"sent":"kid on left","ref_id":18129},{"sent":"kid on right","ref_id":18130},{"sent":"right side of pic","ref_id":18131},{"sent":"white shirt","ref_id":18143},{"sent":"blond hair","ref_id":18144},{"sent":"right side of bike","ref_id":18171},{"sent":"left bike","ref_id":18172},{"sent":"right one","ref_id":18281},{"sent":"bird on left","ref_id":18282},{"sent":"pizza","ref_id":18295},{"sent":"top left bowl","ref_id":18296},{"sent":"right zebra","ref_id":18305},{"sent":"zebra in front","ref_id":18306},{"sent":"left elephant","ref_id":18443},{"sent":"baby elephant","ref_id":18444},{"sent":"left meter","ref_id":18462},{"sent":"right meter","ref_id":18463},{"sent":"left UNK","ref_id":18496},{"sent":"white book","ref_id":18497},{"sent":"giraffe in front","ref_id":18537},{"sent":"top giraffe","ref_id":18538},{"sent":"cat on left","ref_id":18681},{"sent":"cat on right","ref_id":18682},{"sent":"left sheep","ref_id":18698},{"sent":"left sheep","ref_id":18699},{"sent":"baby","ref_id":18700},{"sent":"right animal","ref_id":18726},{"sent":"right sheep","ref_id":18727},{"sent":"right slice","ref_id":18736},{"sent":"left pizza","ref_id":18737},{"sent":"giraffe on left","ref_id":18906},{"sent":"left giraffe","ref_id":18907},{"sent":"top left book","ref_id":18927},{"sent":"right horse","ref_id":19026},{"sent":"middle horse","ref_id":19027},{"sent":"top right bowl","ref_id":19032},{"sent":"bottom left bowl","ref_id":19033},{"sent":"bowl","ref_id":19034},{"sent":"bottom right bowl","ref_id":19035},{"sent":"oranges","ref_id":19125},{"sent":"oranges on left","ref_id":19126},{"sent":"orange in front","ref_id":19127},{"sent":"left side of the orange","ref_id":19249},{"sent":"white car on right","ref_id":19250},{"sent":"right book","ref_id":19533},{"sent":"bottom book","ref_id":19534},{"sent":"bottom book","ref_id":19535},{"sent":"UNK","ref_id":19536},{"sent":"bananas on right","ref_id":19589},{"sent":"left banana","ref_id":19590},{"sent":"chair on right","ref_id":19594},{"sent":"chair on 
right","ref_id":19595},{"sent":"left bear","ref_id":19626},{"sent":"right bear","ref_id":19627},{"sent":"left couch","ref_id":19651},{"sent":"bed on right","ref_id":19652},{"sent":"top left bowl","ref_id":19653},{"sent":"top right bowl","ref_id":19654},{"sent":"bottom left bowl","ref_id":19655},{"sent":"bottom right","ref_id":19656},{"sent":"chair on right","ref_id":19839},{"sent":"front bench","ref_id":19840},{"sent":"red boat","ref_id":19990},{"sent":"white plane on left","ref_id":19991},{"sent":"right train","ref_id":20030},{"sent":"train","ref_id":20031},{"sent":"left glass","ref_id":20032},{"sent":"middle glass","ref_id":20033},{"sent":"glass on right","ref_id":20034},{"sent":"middle animal","ref_id":20279},{"sent":"left cow","ref_id":20280},{"sent":"cow in middle","ref_id":20281},{"sent":"right dog","ref_id":20316},{"sent":"bottom right corner","ref_id":20317},{"sent":"black suitcase","ref_id":20318},{"sent":"top right car","ref_id":20331},{"sent":"red car","ref_id":20332},{"sent":"bottom suitcase","ref_id":20673},{"sent":"red suitcase","ref_id":20674},{"sent":"top suitcase","ref_id":20675},{"sent":"bottom suitcase","ref_id":20676},{"sent":"top right dog","ref_id":20733},{"sent":"cat on the right","ref_id":20734},{"sent":"car on left","ref_id":20793},{"sent":"car on the left","ref_id":20794},{"sent":"red car","ref_id":20865},{"sent":"white car","ref_id":20866},{"sent":"red","ref_id":20925},{"sent":"boat on right","ref_id":20926},{"sent":"bottom right corner","ref_id":20927},{"sent":"top right donut","ref_id":20981},{"sent":"right donut","ref_id":20982},{"sent":"white plate on the right","ref_id":21023},{"sent":"white plate on right","ref_id":21024},{"sent":"chair on right","ref_id":21053},{"sent":"couch","ref_id":21054},{"sent":"sandwich on left","ref_id":21162},{"sent":"sandwich on right","ref_id":21163},{"sent":"right couch","ref_id":21235},{"sent":"right bed","ref_id":21236},{"sent":"bottom right corner","ref_id":21256},{"sent":"bottom left UNK","ref_id":21257},{"sent":"bed on left","ref_id":21288},{"sent":"bed","ref_id":21289},{"sent":"bottom suitcase","ref_id":21638},{"sent":"left carrot","ref_id":21639},{"sent":"UNK","ref_id":21716},{"sent":"top right book","ref_id":21717},{"sent":"right bird","ref_id":21748},{"sent":"duck","ref_id":21749},{"sent":"chair on right","ref_id":21825},{"sent":"chair on left","ref_id":21826},{"sent":"glass on right","ref_id":21925},{"sent":"cup on right","ref_id":21926},{"sent":"left zebra","ref_id":21957},{"sent":"right zebra","ref_id":21958},{"sent":"horse on left","ref_id":22022},{"sent":"horse in front","ref_id":22023},{"sent":"left keyboard","ref_id":22040},{"sent":"keyboard on the right","ref_id":22041},{"sent":"black computer right","ref_id":22042},{"sent":"right monitor","ref_id":22043},{"sent":"top left pizza","ref_id":22050},{"sent":"right slice","ref_id":22051},{"sent":"middle piece of food","ref_id":22163},{"sent":"bottom right corner","ref_id":22164},{"sent":"table in front","ref_id":22289},{"sent":"bed in front","ref_id":22290},{"sent":"orange on top of orange","ref_id":22324},{"sent":"orange bottom left","ref_id":22325},{"sent":"white car","ref_id":22382},{"sent":"white car in back","ref_id":22383},{"sent":"white car","ref_id":22384},{"sent":"bottom right orange","ref_id":22434},{"sent":"bottom right corner","ref_id":22435},{"sent":"bottom orange","ref_id":22436},{"sent":"top left apple","ref_id":22437},{"sent":"bottom left apple","ref_id":22438},{"sent":"right train","ref_id":22473},{"sent":"left train","ref_id":22474},{"sent":"couch 
on left","ref_id":22576},{"sent":"right couch","ref_id":22577},{"sent":"right giraffe","ref_id":22596},{"sent":"left giraffe","ref_id":22597},{"sent":"right zebra","ref_id":22630},{"sent":"black bag on right","ref_id":22656},{"sent":"left blue bag","ref_id":22657},{"sent":"right seat","ref_id":22658},{"sent":"oranges in front","ref_id":22723},{"sent":"chair on right","ref_id":22754},{"sent":"bed on right","ref_id":22755},{"sent":"chair bottom right","ref_id":22756},{"sent":"left sheep","ref_id":22773},{"sent":"right sheep","ref_id":22774},{"sent":"right edge of pic","ref_id":22859},{"sent":"the woman in the middle","ref_id":22860},{"sent":"woman in middle","ref_id":22861},{"sent":"left vase","ref_id":22933},{"sent":"vase","ref_id":22934},{"sent":"left vase","ref_id":22935},{"sent":"blue car on left","ref_id":22943},{"sent":"car on right","ref_id":22944},{"sent":"black cat","ref_id":22945},{"sent":"red book","ref_id":22946},{"sent":"left monitor","ref_id":22966},{"sent":"right monitor","ref_id":22967},{"sent":"left bench","ref_id":23040},{"sent":"right couch","ref_id":23041},{"sent":"orange","ref_id":23088},{"sent":"orange top right","ref_id":23089},{"sent":"orange","ref_id":23090},{"sent":"orange","ref_id":23091},{"sent":"top right corner","ref_id":23092},{"sent":"top left apples","ref_id":23151},{"sent":"middle row second from right","ref_id":23152},{"sent":"middle row second from right","ref_id":23153},{"sent":"broccoli on right","ref_id":23182},{"sent":"broccoli on the right","ref_id":23183},{"sent":"broccoli in middle","ref_id":23184},{"sent":"middle row second from bottom","ref_id":23185},{"sent":"left banana","ref_id":23297},{"sent":"left hot dog","ref_id":23298},{"sent":"the UNK","ref_id":23313},{"sent":"top of train","ref_id":23314},{"sent":"right giraffe","ref_id":23347},{"sent":"baby","ref_id":23362},{"sent":"baby","ref_id":23363},{"sent":"right zebra","ref_id":23469},{"sent":"right zebra","ref_id":23470},{"sent":"left zebra","ref_id":23471},{"sent":"giraffe on left","ref_id":23509},{"sent":"right giraffe","ref_id":23510},{"sent":"left cow","ref_id":23569},{"sent":"cow in middle","ref_id":23570},{"sent":"cow in middle","ref_id":23571},{"sent":"chair on left","ref_id":23583},{"sent":"bottom right corner","ref_id":23584},{"sent":"left hotdog","ref_id":23603},{"sent":"top right corner","ref_id":23604},{"sent":"cat on left","ref_id":23659},{"sent":"cat on left","ref_id":23660},{"sent":"cat on the left","ref_id":23661},{"sent":"elephant on left","ref_id":23721},{"sent":"elephant on right","ref_id":23722},{"sent":"left horse","ref_id":23797},{"sent":"horse on right","ref_id":23798},{"sent":"bottle in middle","ref_id":23810},{"sent":"bottle on right","ref_id":23811},{"sent":"bottle on the left","ref_id":23812},{"sent":"bottle on left","ref_id":23813},{"sent":"bottle on left","ref_id":23814},{"sent":"top right piece of broccoli","ref_id":23878},{"sent":"left piece of food","ref_id":23879},{"sent":"broccoli on left","ref_id":23880},{"sent":"right piece of food","ref_id":23881},{"sent":"right","ref_id":23882},{"sent":"red thing","ref_id":23883},{"sent":"suitcase on the right","ref_id":24098},{"sent":"suitcase on left","ref_id":24099},{"sent":"left bear","ref_id":24120},{"sent":"right bear","ref_id":24121},{"sent":"right bear","ref_id":24122},{"sent":"right bear","ref_id":24123},{"sent":"left bear","ref_id":24124},{"sent":"baby elephant","ref_id":24187},{"sent":"chair in front of man","ref_id":24192},{"sent":"laptop on left","ref_id":24193},{"sent":"laptop on 
right","ref_id":24194},{"sent":"UNK","ref_id":24224},{"sent":"top right book","ref_id":24225},{"sent":"right most UNK","ref_id":24274},{"sent":"second glass from right","ref_id":24275},{"sent":"second glass from left","ref_id":24276},{"sent":"umbrella on left","ref_id":24402},{"sent":"top right umbrella","ref_id":24403},{"sent":"green bus on left","ref_id":24448},{"sent":"bus in front","ref_id":24449},{"sent":"second from right","ref_id":24503},{"sent":"second from left","ref_id":24504},{"sent":"left train","ref_id":24505},{"sent":"second from left","ref_id":24506},{"sent":"cat on the right","ref_id":24523},{"sent":"cat on the right","ref_id":24524},{"sent":"bottom left corner","ref_id":24573},{"sent":"right bottom corner","ref_id":24574},{"sent":"right horse","ref_id":24588},{"sent":"left horse","ref_id":24589},{"sent":"right umbrella","ref_id":24604},{"sent":"top right umbrella","ref_id":24605},{"sent":"umbrella on left","ref_id":24606},{"sent":"cow on right","ref_id":24684},{"sent":"cow on left","ref_id":24685},{"sent":"cow on right","ref_id":24686},{"sent":"bottom carrot","ref_id":24687},{"sent":"top left piece of food","ref_id":24688},{"sent":"top donut","ref_id":24778},{"sent":"top left hot dog","ref_id":24779},{"sent":"bike on right","ref_id":24859},{"sent":"bike","ref_id":24860},{"sent":"white UNK","ref_id":24943},{"sent":"white vase","ref_id":24944},{"sent":"white UNK","ref_id":24945},{"sent":"the UNK","ref_id":24946},{"sent":"right UNK","ref_id":25053},{"sent":"right meter","ref_id":25054},{"sent":"middle meter","ref_id":25055},{"sent":"bowl on left","ref_id":25137},{"sent":"dog on left","ref_id":25151},{"sent":"red dog","ref_id":25152},{"sent":"left monitor","ref_id":25302},{"sent":"right monitor","ref_id":25303},{"sent":"right most vase","ref_id":25313},{"sent":"vase on left","ref_id":25314},{"sent":"white book","ref_id":25336},{"sent":"horse on right","ref_id":25342},{"sent":"green stuff","ref_id":25445},{"sent":"green stuff","ref_id":25446},{"sent":"right couch","ref_id":25504},{"sent":"couch","ref_id":25505},{"sent":"bear in front","ref_id":25659},{"sent":"bear","ref_id":25660},{"sent":"white bear","ref_id":25694},{"sent":"bear","ref_id":25695},{"sent":"red bus","ref_id":25717},{"sent":"left bus","ref_id":25718},{"sent":"red bus on right","ref_id":25719},{"sent":"toilet in front","ref_id":25762},{"sent":"sink on the right","ref_id":25763},{"sent":"apple on left","ref_id":25788},{"sent":"right slice","ref_id":25789},{"sent":"glass on left","ref_id":25826},{"sent":"glass on right","ref_id":25827},{"sent":"right elephant","ref_id":25831},{"sent":"elephant on right","ref_id":25832},{"sent":"top right microwave","ref_id":25888},{"sent":"left monitor","ref_id":25889},{"sent":"broccoli on left","ref_id":26005},{"sent":"broccoli on right","ref_id":26006},{"sent":"broccoli in middle","ref_id":26007},{"sent":"bottom oven","ref_id":26157},{"sent":"bottom oven","ref_id":26158},{"sent":"keyboard","ref_id":26159},{"sent":"white keyboard","ref_id":26160},{"sent":"bottom left corner","ref_id":26344},{"sent":"left chair","ref_id":26345},{"sent":"couch","ref_id":26346},{"sent":"couch","ref_id":26347},{"sent":"left banana","ref_id":26384},{"sent":"banana in the back","ref_id":26385},{"sent":"boat in front","ref_id":26447},{"sent":"left boat","ref_id":26448},{"sent":"boat in front","ref_id":26449},{"sent":"middle boat","ref_id":26450},{"sent":"white truck","ref_id":26513},{"sent":"white truck","ref_id":26514},{"sent":"white truck","ref_id":26515},{"sent":"left 
bike","ref_id":26528},{"sent":"front bike","ref_id":26529},{"sent":"red chair","ref_id":26601},{"sent":"red chair","ref_id":26602},{"sent":"bird on right","ref_id":26618},{"sent":"bird on right","ref_id":26619},{"sent":"bear in front","ref_id":26825},{"sent":"right bear","ref_id":26826},{"sent":"left bus","ref_id":26844},{"sent":"bus in front","ref_id":26845},{"sent":"red light","ref_id":27005},{"sent":"traffic light","ref_id":27006},{"sent":"traffic light","ref_id":27007},{"sent":"middle giraffe","ref_id":27130},{"sent":"left woman","ref_id":27131},{"sent":"right bear","ref_id":27214},{"sent":"left bear","ref_id":27215},{"sent":"right cat","ref_id":27232},{"sent":"cat on left","ref_id":27233},{"sent":"zebra in front","ref_id":27247},{"sent":"right zebra","ref_id":27248},{"sent":"left zebra","ref_id":27249},{"sent":"right car","ref_id":27250},{"sent":"white car","ref_id":27251},{"sent":"top left microwave","ref_id":27288},{"sent":"microwave on right","ref_id":27289},{"sent":"toilet on left","ref_id":27314},{"sent":"toilet","ref_id":27315},{"sent":"sheep on right","ref_id":27373},{"sent":"sheep in front","ref_id":27374},{"sent":"left sandwich","ref_id":27432},{"sent":"right sandwich","ref_id":27433},{"sent":"cat on right","ref_id":27465},{"sent":"cat","ref_id":27466},{"sent":"yellow toothbrush","ref_id":27526},{"sent":"bottom brush","ref_id":27527},{"sent":"top right pizza","ref_id":27572},{"sent":"pizza","ref_id":27573},{"sent":"bottom left carrot","ref_id":27751},{"sent":"sandwich on right","ref_id":27796},{"sent":"sandwich on the left","ref_id":27797},{"sent":"left slice","ref_id":27848},{"sent":"right side of pizza","ref_id":27849},{"sent":"bottom menu","ref_id":27880},{"sent":"top book","ref_id":27881},{"sent":"book on right","ref_id":27882},{"sent":"keyboard on right","ref_id":27883},{"sent":"book in middle","ref_id":27935},{"sent":"UNK book","ref_id":27936},{"sent":"bowl of food","ref_id":27973},{"sent":"giraffe in front","ref_id":28408},{"sent":"left giraffe","ref_id":28409},{"sent":"left flower vase","ref_id":28439},{"sent":"right plant","ref_id":28440},{"sent":"left vase","ref_id":28441},{"sent":"right vase","ref_id":28442},{"sent":"train","ref_id":28852},{"sent":"top right corner","ref_id":28853},{"sent":"top right chair","ref_id":28854},{"sent":"cat on the right","ref_id":29103},{"sent":"cat on right","ref_id":29104},{"sent":"bottom left oven","ref_id":29105},{"sent":"right sink","ref_id":29106},{"sent":"middle giraffe","ref_id":29153},{"sent":"giraffe in front","ref_id":29154},{"sent":"dog in front","ref_id":29238},{"sent":"dog on left","ref_id":29239},{"sent":"front plate","ref_id":29270},{"sent":"right slice","ref_id":29271},{"sent":"baby","ref_id":29301},{"sent":"elephant","ref_id":29302},{"sent":"bottom right dish","ref_id":29360},{"sent":"bottom plate","ref_id":29361},{"sent":"giraffe on right","ref_id":29385},{"sent":"giraffe on right","ref_id":29386},{"sent":"giraffe in front","ref_id":29387},{"sent":"left giraffe","ref_id":29388},{"sent":"left giraffe","ref_id":29389},{"sent":"left bear","ref_id":29460},{"sent":"white bear","ref_id":29461},{"sent":"big bear","ref_id":29462},{"sent":"bowl of soup","ref_id":29569},{"sent":"white cup","ref_id":29570},{"sent":"white car on right","ref_id":29575},{"sent":"truck","ref_id":29576},{"sent":"second bike from left","ref_id":29625},{"sent":"second bike from left","ref_id":29626},{"sent":"bike on right","ref_id":29627},{"sent":"left train","ref_id":29630},{"sent":"left plane","ref_id":29631},{"sent":"left 
bench","ref_id":29856},{"sent":"bench in front","ref_id":29857},{"sent":"top right umbrella","ref_id":29920},{"sent":"top left umbrella","ref_id":29921},{"sent":"guy in back","ref_id":29964},{"sent":"bus","ref_id":29967},{"sent":"white car","ref_id":29974},{"sent":"car on left","ref_id":29975},{"sent":"middle screen","ref_id":30281},{"sent":"left monitor","ref_id":30282},{"sent":"elephant on left","ref_id":30393},{"sent":"right elephant","ref_id":30394},{"sent":"bed on left","ref_id":30401},{"sent":"bottom bed","ref_id":30402},{"sent":"left bed","ref_id":30403},{"sent":"plant on left","ref_id":30480},{"sent":"vase","ref_id":30481},{"sent":"left pot","ref_id":30482},{"sent":"cow in back","ref_id":30529},{"sent":"cow","ref_id":30530},{"sent":"bottom left suitcase","ref_id":30631},{"sent":"black suitcase","ref_id":30632},{"sent":"right suitcase","ref_id":30633},{"sent":"black suitcase","ref_id":30634},{"sent":"black cat","ref_id":30699},{"sent":"cat on left","ref_id":30700},{"sent":"bear on left","ref_id":30701},{"sent":"bear","ref_id":30702},{"sent":"motorcycle on right","ref_id":30719},{"sent":"front bike","ref_id":30720},{"sent":"front left bike","ref_id":30721},{"sent":"right dish","ref_id":30813},{"sent":"left bowl","ref_id":30814},{"sent":"cat on left","ref_id":30839},{"sent":"cat on the left","ref_id":30840},{"sent":"truck","ref_id":30869},{"sent":"white truck","ref_id":30870},{"sent":"glass with red liquid","ref_id":30970},{"sent":"glass on right","ref_id":30971},{"sent":"right meter","ref_id":30996},{"sent":"left meter","ref_id":30997},{"sent":"right screen","ref_id":31025},{"sent":"left monitor","ref_id":31026},{"sent":"giraffe in back","ref_id":31114},{"sent":"right giraffe","ref_id":31115},{"sent":"left sheep","ref_id":31161},{"sent":"right sheep","ref_id":31162},{"sent":"sandwich on right","ref_id":31324},{"sent":"sandwich on left","ref_id":31325},{"sent":"cup on right","ref_id":31373},{"sent":"top right cup","ref_id":31374},{"sent":"bowl of food on left","ref_id":31375},{"sent":"cup of coffee","ref_id":31376},{"sent":"cup on right","ref_id":31377},{"sent":"bowl of UNK","ref_id":31378},{"sent":"horse on left","ref_id":31391},{"sent":"horse on right","ref_id":31392},{"sent":"sheep on right","ref_id":31393},{"sent":"sheep in back","ref_id":31394},{"sent":"sheep on right","ref_id":31395},{"sent":"bottom right corner","ref_id":31396},{"sent":"bottom left sheep","ref_id":31397},{"sent":"bus on right","ref_id":31558},{"sent":"bus in front","ref_id":31559},{"sent":"left bus","ref_id":31560},{"sent":"front vase","ref_id":31579},{"sent":"vase on left","ref_id":31580},{"sent":"right vase","ref_id":31581},{"sent":"bike on right","ref_id":31594},{"sent":"red bike","ref_id":31595},{"sent":"red bike","ref_id":31596},{"sent":"truck on right","ref_id":31619},{"sent":"truck","ref_id":31620},{"sent":"right sandwich","ref_id":31687},{"sent":"left sandwich","ref_id":31688},{"sent":"white thing on right","ref_id":31703},{"sent":"horse on left","ref_id":31706},{"sent":"giraffe in middle","ref_id":31729},{"sent":"giraffe in front","ref_id":31730},{"sent":"right sheep","ref_id":31736},{"sent":"left sheep","ref_id":31737},{"sent":"right animal","ref_id":31758},{"sent":"left sheep","ref_id":31759},{"sent":"meter on the right","ref_id":31778},{"sent":"right meter","ref_id":31779},{"sent":"sheep in front","ref_id":31897},{"sent":"sheep in front","ref_id":31898},{"sent":"right sheep","ref_id":31899},{"sent":"left donut","ref_id":31960},{"sent":"right donut","ref_id":31961},{"sent":"umbrella on 
left","ref_id":31981},{"sent":"umbrella","ref_id":31982},{"sent":"elephant on left","ref_id":32094},{"sent":"elephant on right","ref_id":32095},{"sent":"right sandwich","ref_id":32165},{"sent":"left hot dog","ref_id":32166},{"sent":"slice of pizza","ref_id":32214},{"sent":"top oven","ref_id":32265},{"sent":"stove","ref_id":32266},{"sent":"motorcycle in front","ref_id":32311},{"sent":"front bike","ref_id":32312},{"sent":"left sandwich","ref_id":32362},{"sent":"bottom left food","ref_id":32363},{"sent":"right sandwich","ref_id":32364},{"sent":"bottom left bread","ref_id":32365},{"sent":"hand","ref_id":32370},{"sent":"hand","ref_id":32371},{"sent":"right screen","ref_id":32572},{"sent":"left monitor","ref_id":32573},{"sent":"dog on right","ref_id":32642},{"sent":"dog on left","ref_id":32643},{"sent":"zebra in back","ref_id":32928},{"sent":"zebra in front","ref_id":32929},{"sent":"left glass","ref_id":32956},{"sent":"person in back","ref_id":32957},{"sent":"pizza slice on top","ref_id":33014},{"sent":"pizza slice on right","ref_id":33015},{"sent":"pizza slice on right","ref_id":33016},{"sent":"bottom left pizza","ref_id":33017},{"sent":"top pizza","ref_id":33018},{"sent":"left UNK","ref_id":33237},{"sent":"bottom right corner","ref_id":33238},{"sent":"the little UNK","ref_id":33239},{"sent":"red vase","ref_id":33240},{"sent":"the little UNK","ref_id":33241},{"sent":"left vase","ref_id":33242},{"sent":"cat on right","ref_id":33291},{"sent":"left cat","ref_id":33292},{"sent":"zebra in front","ref_id":33439},{"sent":"right light","ref_id":33455},{"sent":"pizza on right","ref_id":33470},{"sent":"bottom left slice","ref_id":33471},{"sent":"right elephant","ref_id":33500},{"sent":"elephant on left","ref_id":33501},{"sent":"bottom donut","ref_id":33626},{"sent":"donut on right","ref_id":33627},{"sent":"donut on left","ref_id":33628},{"sent":"zebra on left","ref_id":33639},{"sent":"right train","ref_id":33681},{"sent":"left train","ref_id":33682},{"sent":"right bus","ref_id":33683},{"sent":"chair on right","ref_id":33684},{"sent":"top right dog","ref_id":33685},{"sent":"cat on left","ref_id":33686},{"sent":"red bike","ref_id":33714},{"sent":"front bike","ref_id":33715},{"sent":"bottom left cup","ref_id":33800},{"sent":"cup","ref_id":33801},{"sent":"elephant in front","ref_id":33806},{"sent":"elephant on right","ref_id":33807},{"sent":"bottom left bowl","ref_id":33829},{"sent":"bottom left cup","ref_id":33830},{"sent":"bowl of rice in back right","ref_id":33831},{"sent":"broccoli on left","ref_id":33914},{"sent":"bottom left broccoli","ref_id":33915},{"sent":"middle donut","ref_id":33952},{"sent":"middle donut","ref_id":33953},{"sent":"second from right","ref_id":33992},{"sent":"second from right","ref_id":33993},{"sent":"second from right","ref_id":33994},{"sent":"right carrot","ref_id":33995},{"sent":"right side of food","ref_id":33996},{"sent":"right most carrot","ref_id":33997},{"sent":"UNK","ref_id":34321},{"sent":"glass of water","ref_id":34322},{"sent":"elephant on right","ref_id":34631},{"sent":"elephant in front","ref_id":34632},{"sent":"top left food","ref_id":34787},{"sent":"top left food","ref_id":34788},{"sent":"left umbrella","ref_id":34858},{"sent":"left umbrella","ref_id":34859},{"sent":"bottom right bowl","ref_id":34895},{"sent":"car in front of the cart","ref_id":34943},{"sent":"car on left","ref_id":34944},{"sent":"pizza on right","ref_id":34998},{"sent":"pizza slice on left","ref_id":34999},{"sent":"UNK","ref_id":35034},{"sent":"left monitor","ref_id":35035},{"sent":"left 
keyboard","ref_id":35090},{"sent":"laptop on left","ref_id":35091},{"sent":"top left laptop","ref_id":35121},{"sent":"top left chair","ref_id":35122},{"sent":"chair on right","ref_id":35148},{"sent":"couch on right","ref_id":35149},{"sent":"broccoli on right","ref_id":35188},{"sent":"broccoli on the right","ref_id":35189},{"sent":"broccoli on left","ref_id":35190},{"sent":"broccoli in middle","ref_id":35191},{"sent":"middle bird","ref_id":35194},{"sent":"left cat","ref_id":35195},{"sent":"bird on left","ref_id":35217},{"sent":"bird on the right","ref_id":35218},{"sent":"black bag on left","ref_id":35368},{"sent":"black bag on top of suitcase","ref_id":35369},{"sent":"right suitcase","ref_id":35370},{"sent":"right suitcase","ref_id":35371},{"sent":"black suitcase on top of suitcase","ref_id":35372},{"sent":"top right orange","ref_id":35377},{"sent":"middle row second from right","ref_id":35378},{"sent":"second from left","ref_id":35379},{"sent":"stove top right","ref_id":35391},{"sent":"oven","ref_id":35392},{"sent":"left girl","ref_id":35420},{"sent":"chair on right","ref_id":35421},{"sent":"top right apple","ref_id":35522},{"sent":"orange","ref_id":35523},{"sent":"orange bottom right","ref_id":35524},{"sent":"bottom right apple","ref_id":35525},{"sent":"top apple","ref_id":35526},{"sent":"right suitcase","ref_id":35764},{"sent":"right bag","ref_id":35765},{"sent":"left couch","ref_id":35833},{"sent":"right couch","ref_id":35834},{"sent":"right zebra","ref_id":35913},{"sent":"left zebra","ref_id":35914},{"sent":"green plant","ref_id":36001},{"sent":"bottom right corner","ref_id":36002},{"sent":"cat on right","ref_id":36082},{"sent":"cat","ref_id":36083},{"sent":"white truck","ref_id":36111},{"sent":"white truck","ref_id":36112},{"sent":"right zebra","ref_id":36209},{"sent":"left zebra","ref_id":36210},{"sent":"bottom right food","ref_id":36224},{"sent":"right pizza","ref_id":36225},{"sent":"top right banana","ref_id":36279},{"sent":"bottom left apple","ref_id":36280},{"sent":"banana in middle","ref_id":36281},{"sent":"UNK","ref_id":36365},{"sent":"top left umbrella","ref_id":36366},{"sent":"bottom right microwave","ref_id":36432},{"sent":"bottom right corner","ref_id":36433},{"sent":"left monitor","ref_id":36434},{"sent":"top right microwave","ref_id":36435},{"sent":"left microwave","ref_id":36436},{"sent":"sandwich","ref_id":36689},{"sent":"top sandwich","ref_id":36690},{"sent":"bottom left orange","ref_id":36725},{"sent":"orange peel","ref_id":36726},{"sent":"right glass","ref_id":36762},{"sent":"right glass","ref_id":36763},{"sent":"glass on left","ref_id":36764},{"sent":"wine bottle on right","ref_id":36789},{"sent":"left glass","ref_id":36790},{"sent":"right giraffe","ref_id":36894},{"sent":"left giraffe","ref_id":36895},{"sent":"red sweater","ref_id":36900},{"sent":"bear on right","ref_id":36901},{"sent":"green and white UNK","ref_id":36902},{"sent":"bear on right","ref_id":36903},{"sent":"right animal","ref_id":37150},{"sent":"right animal","ref_id":37151},{"sent":"chair in middle","ref_id":37201},{"sent":"right chair","ref_id":37202},{"sent":"left chair","ref_id":37203},{"sent":"white plane","ref_id":37213},{"sent":"plane in front","ref_id":37214},{"sent":"bottom left corner","ref_id":37252},{"sent":"vase on right","ref_id":37253},{"sent":"right vase","ref_id":37254},{"sent":"UNK","ref_id":37255},{"sent":"bananas on left","ref_id":37278},{"sent":"right bunch","ref_id":37279},{"sent":"slice of pizza in front","ref_id":37540},{"sent":"pizza","ref_id":37541},{"sent":"white 
umbrella","ref_id":37572},{"sent":"bottom right corner","ref_id":37573},{"sent":"left bowl","ref_id":37650},{"sent":"broccoli","ref_id":37651},{"sent":"top right bowl","ref_id":37652},{"sent":"bowl of UNK","ref_id":37661},{"sent":"bowl of UNK","ref_id":37662},{"sent":"bowl of rice","ref_id":37663},{"sent":"black suitcase on left","ref_id":37710},{"sent":"suitcase on left","ref_id":37711},{"sent":"right suitcase","ref_id":37712},{"sent":"zebra on left","ref_id":37749},{"sent":"zebra in back","ref_id":37750},{"sent":"bottom left corner","ref_id":37802},{"sent":"middle cake","ref_id":37803},{"sent":"bed on right","ref_id":37879},{"sent":"couch on left","ref_id":37880},{"sent":"couch on left","ref_id":37895},{"sent":"couch","ref_id":37896},{"sent":"bottom right corner","ref_id":37933},{"sent":"cow in front","ref_id":37963},{"sent":"black cow","ref_id":37964},{"sent":"bottom phone","ref_id":38029},{"sent":"right phone","ref_id":38030},{"sent":"banana on top","ref_id":38151},{"sent":"top left sandwich","ref_id":38266},{"sent":"sandwich","ref_id":38267},{"sent":"food in front","ref_id":38268},{"sent":"second from right","ref_id":38333},{"sent":"second board from right","ref_id":38334},{"sent":"left board","ref_id":38335},{"sent":"right","ref_id":38368},{"sent":"bottom left","ref_id":38369},{"sent":"top left dish","ref_id":38370},{"sent":"carrots","ref_id":38371},{"sent":"right bear","ref_id":38388},{"sent":"left bear","ref_id":38389},{"sent":"top dog","ref_id":38567},{"sent":"dog","ref_id":38568},{"sent":"clock face","ref_id":38601},{"sent":"clock on left","ref_id":38602},{"sent":"clock on right","ref_id":38603},{"sent":"white plane","ref_id":38647},{"sent":"bed on right","ref_id":38648},{"sent":"bed","ref_id":38649},{"sent":"top right corner","ref_id":38720},{"sent":"right bottom corner","ref_id":38721},{"sent":"top dog","ref_id":38779},{"sent":"left dog","ref_id":38780},{"sent":"top right dog","ref_id":38781},{"sent":"right glass","ref_id":38783},{"sent":"chair on right","ref_id":38903},{"sent":"red shirt","ref_id":38919},{"sent":"top right corner","ref_id":38920},{"sent":"right","ref_id":39016},{"sent":"left vase","ref_id":39017},{"sent":"right bottle","ref_id":39018},{"sent":"left elephant","ref_id":39024},{"sent":"elephant on right","ref_id":39025},{"sent":"left elephant","ref_id":39026},{"sent":"giraffe in middle","ref_id":39045},{"sent":"left train","ref_id":39150},{"sent":"front train","ref_id":39151},{"sent":"cat on the left","ref_id":39159},{"sent":"top cat","ref_id":39160},{"sent":"left suitcase","ref_id":39206},{"sent":"right","ref_id":39207},{"sent":"giraffe in front","ref_id":39387},{"sent":"giraffe in front","ref_id":39388},{"sent":"white teddy bear on right","ref_id":39400},{"sent":"teddy bear in middle","ref_id":39401},{"sent":"bear on left","ref_id":39402},{"sent":"bottom right bear","ref_id":39403},{"sent":"bear on left","ref_id":39404},{"sent":"bear on right","ref_id":39405},{"sent":"left elephant","ref_id":39448},{"sent":"elephant in front","ref_id":39449},{"sent":"cow on left","ref_id":39456},{"sent":"cow on right","ref_id":39457},{"sent":"right sandwich","ref_id":39460},{"sent":"sandwich on left","ref_id":39461},{"sent":"left chair","ref_id":39765},{"sent":"chair on left","ref_id":39766},{"sent":"left cow","ref_id":39797},{"sent":"cow on right","ref_id":39798},{"sent":"green apple","ref_id":39815},{"sent":"green apple","ref_id":39816},{"sent":"green apple","ref_id":39817},{"sent":"right zebra","ref_id":39850},{"sent":"zebra in front","ref_id":39851},{"sent":"right 
pizza","ref_id":39883},{"sent":"pizza in front","ref_id":39884},{"sent":"pizza on left","ref_id":39885},{"sent":"right laptop","ref_id":39891},{"sent":"left laptop","ref_id":39892},{"sent":"left truck","ref_id":39963},{"sent":"red truck","ref_id":39964},{"sent":"top left orange","ref_id":39965},{"sent":"orange","ref_id":39966},{"sent":"cat on left","ref_id":40063},{"sent":"cat on right","ref_id":40064},{"sent":"bed","ref_id":40188},{"sent":"bed on left","ref_id":40189},{"sent":"top right microwave","ref_id":40211},{"sent":"middle UNK","ref_id":40212},{"sent":"right phone","ref_id":40213},{"sent":"left cake","ref_id":40235},{"sent":"left cake","ref_id":40236},{"sent":"right cake","ref_id":40237},{"sent":"bottom sandwich","ref_id":40238},{"sent":"left most seat","ref_id":40248},{"sent":"left suitcase","ref_id":40249},{"sent":"bottom right bowl","ref_id":40287},{"sent":"top right bowl","ref_id":40288},{"sent":"table","ref_id":40350},{"sent":"table behind the table","ref_id":40351},{"sent":"left person","ref_id":40358},{"sent":"cat on right","ref_id":40400},{"sent":"left cat","ref_id":40401},{"sent":"the plant","ref_id":40456},{"sent":"left bunch","ref_id":40457},{"sent":"right dog","ref_id":40458},{"sent":"dog on left","ref_id":40459},{"sent":"left bike","ref_id":40479},{"sent":"front bike","ref_id":40480},{"sent":"bike on right","ref_id":40500},{"sent":"bike in front","ref_id":40501},{"sent":"bike on left","ref_id":40502},{"sent":"top bowl","ref_id":40554},{"sent":"right sandwich","ref_id":40555},{"sent":"cow in front","ref_id":40571},{"sent":"cow on right","ref_id":40572},{"sent":"left meter","ref_id":40753},{"sent":"left car","ref_id":40754},{"sent":"bottom left orange","ref_id":40762},{"sent":"orange","ref_id":40763},{"sent":"left giraffe","ref_id":40804},{"sent":"giraffe on right","ref_id":40805},{"sent":"bottom left fruit","ref_id":40810},{"sent":"banana slice in the middle","ref_id":40811},{"sent":"banana on right","ref_id":40812},{"sent":"toilet","ref_id":40909},{"sent":"toilet","ref_id":40910},{"sent":"second from right","ref_id":40945},{"sent":"second row from right","ref_id":40946},{"sent":"second from right","ref_id":40947},{"sent":"second banana from left","ref_id":40948},{"sent":"second row from left","ref_id":40949},{"sent":"left elephant","ref_id":41094},{"sent":"baby elephant","ref_id":41095},{"sent":"right cow","ref_id":41136},{"sent":"giraffe in front","ref_id":41137},{"sent":"chair on right","ref_id":41148},{"sent":"bed","ref_id":41167},{"sent":"right bed","ref_id":41168},{"sent":"left screen","ref_id":41173},{"sent":"right monitor","ref_id":41174},{"sent":"left white chair","ref_id":41197},{"sent":"chair on right","ref_id":41198},{"sent":"UNK","ref_id":41209},{"sent":"blue UNK","ref_id":41351},{"sent":"right giraffe","ref_id":41359},{"sent":"giraffe on left","ref_id":41360},{"sent":"right bed","ref_id":41531},{"sent":"red bed","ref_id":41532},{"sent":"left giraffe","ref_id":41551},{"sent":"right giraffe","ref_id":41552},{"sent":"sheep in back","ref_id":41743},{"sent":"sheep in front","ref_id":41744},{"sent":"broccoli on left","ref_id":41795},{"sent":"broccoli on right","ref_id":41796},{"sent":"broccoli on right","ref_id":41797},{"sent":"broccoli on left","ref_id":41798},{"sent":"broccoli on left","ref_id":41799},{"sent":"bottom right food","ref_id":41805},{"sent":"left bowl","ref_id":41806},{"sent":"white bowl on right","ref_id":41807},{"sent":"right most food","ref_id":41808},{"sent":"bowl of food on left","ref_id":41809},{"sent":"sheep in 
back","ref_id":41877},{"sent":"right sheep","ref_id":41878},{"sent":"cat on right","ref_id":41938},{"sent":"cat on left","ref_id":41939},{"sent":"left horse","ref_id":42136},{"sent":"left horse","ref_id":42137},{"sent":"bird","ref_id":42284},{"sent":"bird","ref_id":42285},{"sent":"left bird","ref_id":42296},{"sent":"duck","ref_id":42297},{"sent":"red surfboard","ref_id":42329},{"sent":"white boat","ref_id":42330},{"sent":"bottom left toilet","ref_id":42354},{"sent":"white toilet","ref_id":42355},{"sent":"toilet on left","ref_id":42356},{"sent":"toilet on right","ref_id":42357},{"sent":"toilet on left","ref_id":42358},{"sent":"toilet on left","ref_id":42359},{"sent":"bananas on left","ref_id":42428},{"sent":"banana bunch","ref_id":42429},{"sent":"bowl of food on right","ref_id":42658},{"sent":"bowl of food","ref_id":42659},{"sent":"bowl of food","ref_id":42660},{"sent":"left donut","ref_id":42697},{"sent":"bottom right donut","ref_id":42698},{"sent":"train","ref_id":42839},{"sent":"left bird","ref_id":42922},{"sent":"bird on right","ref_id":42923},{"sent":"zebra on left","ref_id":42928},{"sent":"zebra on right","ref_id":42929},{"sent":"zebra in back","ref_id":42930},{"sent":"sandwich on top","ref_id":43160},{"sent":"top left sandwich","ref_id":43161},{"sent":"cake","ref_id":43210},{"sent":"bottom left suitcase","ref_id":43211},{"sent":"bird on the left","ref_id":43396},{"sent":"plate of food","ref_id":43446},{"sent":"top bowl","ref_id":43447},{"sent":"plate with lettuce","ref_id":43448},{"sent":"sandwich on right","ref_id":43449},{"sent":"right sandwich","ref_id":43480},{"sent":"sandwich on left","ref_id":43481},{"sent":"toilet","ref_id":43581},{"sent":"toilet","ref_id":43582},{"sent":"cat on right","ref_id":43596},{"sent":"black chair bottom right","ref_id":43597},{"sent":"giraffe on right","ref_id":43598},{"sent":"left giraffe","ref_id":43599},{"sent":"right sandwich","ref_id":43700},{"sent":"sandwich on left","ref_id":43701},{"sent":"red truck","ref_id":43784},{"sent":"truck","ref_id":43785},{"sent":"banana on right","ref_id":43815},{"sent":"right banana","ref_id":43816},{"sent":"bottom right corner","ref_id":43817},{"sent":"banana on right","ref_id":43818},{"sent":"middle banana","ref_id":43819},{"sent":"cow on left","ref_id":43940},{"sent":"top cow","ref_id":43941},{"sent":"black suitcase on right","ref_id":43987},{"sent":"black suitcase on right","ref_id":43988},{"sent":"red suitcase","ref_id":43989},{"sent":"black suitcase in middle","ref_id":43990},{"sent":"baby","ref_id":44146},{"sent":"teddy bear","ref_id":44147},{"sent":"bottom left food","ref_id":44160},{"sent":"right plate","ref_id":44161},{"sent":"bottom tray","ref_id":44162},{"sent":"top plate","ref_id":44163},{"sent":"right bed","ref_id":44228},{"sent":"bed on left","ref_id":44229},{"sent":"bottom left bowl","ref_id":44296},{"sent":"bottom banana","ref_id":44297},{"sent":"front giraffe","ref_id":44300},{"sent":"giraffe on right","ref_id":44301},{"sent":"plane in front","ref_id":44369},{"sent":"plane on left","ref_id":44370},{"sent":"plane in front","ref_id":44408},{"sent":"plane in front","ref_id":44409},{"sent":"top right cake","ref_id":44426},{"sent":"left half of sandwich","ref_id":44427},{"sent":"left sandwich","ref_id":44428},{"sent":"right half of sandwich","ref_id":44429},{"sent":"bottom left bread","ref_id":44430},{"sent":"bottom left cake","ref_id":44431},{"sent":"right half of sandwich","ref_id":44432},{"sent":"white toilet","ref_id":44454},{"sent":"left sheep","ref_id":44463},{"sent":"sheep in 
front","ref_id":44464},{"sent":"sandwich on right","ref_id":44467},{"sent":"left sandwich","ref_id":44468},{"sent":"right car","ref_id":44514},{"sent":"car on left","ref_id":44515},{"sent":"yellow cab","ref_id":44516},{"sent":"right toilet","ref_id":44573},{"sent":"toilet on left","ref_id":44574},{"sent":"bottom right bear","ref_id":44661},{"sent":"bear on left","ref_id":44662},{"sent":"bear on right","ref_id":44663},{"sent":"right meter","ref_id":44814},{"sent":"left meter","ref_id":44815},{"sent":"red sauce","ref_id":44856},{"sent":"glass on left","ref_id":44857},{"sent":"bear on right","ref_id":44884},{"sent":"brown bear","ref_id":44885},{"sent":"bear on left","ref_id":44886},{"sent":"bear on right","ref_id":44887},{"sent":"teddy bear on left","ref_id":44888},{"sent":"left meter","ref_id":44941},{"sent":"right meter","ref_id":44942},{"sent":"UNK","ref_id":44989},{"sent":"top right corner","ref_id":44990},{"sent":"bed","ref_id":45056},{"sent":"bottom left suitcase","ref_id":45057},{"sent":"blue umbrella","ref_id":45127},{"sent":"left blue vase","ref_id":45128},{"sent":"blue umbrella","ref_id":45129},{"sent":"left blue umbrella","ref_id":45130},{"sent":"bottom right corner","ref_id":45131},{"sent":"glass of the beer","ref_id":45256},{"sent":"right side of table","ref_id":45257},{"sent":"white bus","ref_id":45498},{"sent":"bus in middle","ref_id":45499},{"sent":"right bunch of bananas","ref_id":45503},{"sent":"bananas on right","ref_id":45504},{"sent":"bananas on the left","ref_id":45505},{"sent":"elephant on right","ref_id":45533},{"sent":"elephant on left","ref_id":45534},{"sent":"elephant in front","ref_id":45535},{"sent":"left zebra","ref_id":45645},{"sent":"right zebra","ref_id":45646},{"sent":"middle row","ref_id":45680},{"sent":"bottom right cake","ref_id":45681},{"sent":"white car in back","ref_id":45957},{"sent":"train","ref_id":45958},{"sent":"pizza","ref_id":45972},{"sent":"pizza slice","ref_id":45973},{"sent":"table in front","ref_id":45974},{"sent":"top right corner","ref_id":45975},{"sent":"sandwich on left","ref_id":45976},{"sent":"right half of sandwich","ref_id":45977},{"sent":"blue thing","ref_id":45984},{"sent":"blue thing","ref_id":45985},{"sent":"chair behind dog","ref_id":46007},{"sent":"chair on right","ref_id":46008},{"sent":"donut on right","ref_id":46423},{"sent":"bottom row second from left","ref_id":46439},{"sent":"bottom row second from left","ref_id":46440},{"sent":"second row from bottom right","ref_id":46441},{"sent":"right giraffe","ref_id":46476},{"sent":"giraffe on left","ref_id":46477},{"sent":"truck on right","ref_id":46501},{"sent":"white truck","ref_id":46502},{"sent":"white truck","ref_id":46503},{"sent":"right elephant","ref_id":46569},{"sent":"middle elephant","ref_id":46570},{"sent":"right zebra","ref_id":46668},{"sent":"zebra in front","ref_id":46669},{"sent":"top pizza","ref_id":46684},{"sent":"pizza","ref_id":46685},{"sent":"car in front","ref_id":46724},{"sent":"car in front","ref_id":46725},{"sent":"sheep in front","ref_id":46744},{"sent":"bottom right bowl","ref_id":46773},{"sent":"bowl of food on left","ref_id":46774},{"sent":"middle bowl","ref_id":46775},{"sent":"left pizza","ref_id":46796},{"sent":"pizza on right","ref_id":46797},{"sent":"pizza on the right","ref_id":46798},{"sent":"bear on left","ref_id":46817},{"sent":"right bear","ref_id":46818},{"sent":"right plant","ref_id":46965},{"sent":"left UNK","ref_id":46966},{"sent":"left suitcase","ref_id":46986},{"sent":"right suitcase","ref_id":46987},{"sent":"right 
suitcase","ref_id":46988},{"sent":"right suitcase","ref_id":46989},{"sent":"suitcase in middle","ref_id":46990},{"sent":"top carrot","ref_id":47273},{"sent":"top carrot","ref_id":47274},{"sent":"top carrot","ref_id":47275},{"sent":"top carrot","ref_id":47276},{"sent":"bottom left carrot","ref_id":47277},{"sent":"right giraffe","ref_id":47305},{"sent":"left giraffe","ref_id":47306},{"sent":"bowl of food","ref_id":47310},{"sent":"bowl of rice","ref_id":47311},{"sent":"top right bowl","ref_id":47312},{"sent":"train on right","ref_id":47313},{"sent":"train on right","ref_id":47314},{"sent":"train on left","ref_id":47315},{"sent":"chair on left","ref_id":47318},{"sent":"right giraffe","ref_id":47366},{"sent":"left zebra","ref_id":47367},{"sent":"white cow","ref_id":47450},{"sent":"big cow","ref_id":47451},{"sent":"white sheep on right","ref_id":47452},{"sent":"white sheep on right","ref_id":47453},{"sent":"top right sheep","ref_id":47454},{"sent":"white cow","ref_id":47455},{"sent":"top left suitcase","ref_id":47529},{"sent":"white boat on left","ref_id":47530},{"sent":"white boat","ref_id":47531},{"sent":"left giraffe","ref_id":47603},{"sent":"right giraffe","ref_id":47604},{"sent":"white plate","ref_id":47644},{"sent":"top left food","ref_id":47645},{"sent":"bear on right","ref_id":47740},{"sent":"bear on right","ref_id":47741},{"sent":"bear in middle","ref_id":47742},{"sent":"bear on right","ref_id":47743},{"sent":"bed on left","ref_id":47840},{"sent":"bed","ref_id":47841},{"sent":"black dog","ref_id":47875},{"sent":"dog on right","ref_id":47876},{"sent":"giraffe in front","ref_id":47931},{"sent":"giraffe on left","ref_id":47932},{"sent":"left person","ref_id":47957},{"sent":"giraffe on left","ref_id":48055},{"sent":"giraffe","ref_id":48056},{"sent":"bowl of food","ref_id":48175},{"sent":"right bowl","ref_id":48176},{"sent":"zebra in front","ref_id":48302},{"sent":"right zebra","ref_id":48303},{"sent":"bottom left food","ref_id":48441},{"sent":"left piece of food","ref_id":48442},{"sent":"top right food","ref_id":48443},{"sent":"right food","ref_id":48444},{"sent":"left vase","ref_id":48545},{"sent":"right vase","ref_id":48546},{"sent":"second bike from right","ref_id":48585},{"sent":"second from right","ref_id":48586},{"sent":"second from left","ref_id":48587},{"sent":"left bike","ref_id":48588},{"sent":"second bike from right","ref_id":48589},{"sent":"right half of sandwich","ref_id":48623},{"sent":"left sandwich","ref_id":48624},{"sent":"right sandwich","ref_id":48625},{"sent":"bottom right chair","ref_id":48681},{"sent":"right couch","ref_id":48682},{"sent":"bottom left corner","ref_id":48683},{"sent":"couch on right","ref_id":48684},{"sent":"couch","ref_id":48685},{"sent":"left couch","ref_id":48686},{"sent":"left cake","ref_id":48861},{"sent":"right cake","ref_id":48862},{"sent":"left elephant","ref_id":48865},{"sent":"baby elephant","ref_id":48866},{"sent":"right elephant","ref_id":48888},{"sent":"left elephant","ref_id":48889},{"sent":"table in front","ref_id":49111},{"sent":"right truck","ref_id":49165},{"sent":"truck","ref_id":49166},{"sent":"truck","ref_id":49167},{"sent":"right truck","ref_id":49168},{"sent":"right bike","ref_id":49248},{"sent":"right bike","ref_id":49249},{"sent":"right bike","ref_id":49250},{"sent":"donut in middle","ref_id":49288},{"sent":"donut on left","ref_id":49289},{"sent":"donut in middle","ref_id":49290},{"sent":"giraffe on left","ref_id":49291},{"sent":"left giraffe","ref_id":49292},{"sent":"giraffe in front","ref_id":49293},{"sent":"left 
cup","ref_id":49377},{"sent":"right cup","ref_id":49378},{"sent":"left bottle","ref_id":49429},{"sent":"bottle on right","ref_id":49430},{"sent":"pizza","ref_id":49446},{"sent":"pizza slice on right","ref_id":49447},{"sent":"pizza slice on right","ref_id":49448},{"sent":"broccoli on the right","ref_id":49455},{"sent":"broccoli in the middle","ref_id":49456},{"sent":"second board from right","ref_id":49502},{"sent":"blue board","ref_id":49503},{"sent":"right plant","ref_id":49583},{"sent":"left plant","ref_id":49584},{"sent":"black suitcase","ref_id":49672},{"sent":"blue tie","ref_id":49673},{"sent":"middle bus","ref_id":49701},{"sent":"second bus from right","ref_id":49702},{"sent":"right bus","ref_id":49703},{"sent":"right suitcase","ref_id":49721},{"sent":"right side of pizza","ref_id":49781},{"sent":"right slice","ref_id":49782},{"sent":"dog on right","ref_id":49818},{"sent":"left dog","ref_id":49819},{"sent":"left car","ref_id":49824},{"sent":"white car","ref_id":49825},{"sent":"right cup","ref_id":49949},{"sent":"right cup","ref_id":49950},{"sent":"zebra in back","ref_id":49986},{"sent":"zebra on left","ref_id":49987},{"sent":"car on left","ref_id":25},{"sent":"car on left","ref_id":26},{"sent":"top sandwich","ref_id":27},{"sent":"top left donut","ref_id":28},{"sent":"zebra on left","ref_id":45},{"sent":"right zebra","ref_id":46},{"sent":"chair in front of man","ref_id":164},{"sent":"bottom right corner","ref_id":165},{"sent":"left chair","ref_id":166},{"sent":"top right corner","ref_id":232},{"sent":"pizza in front","ref_id":233},{"sent":"glass in back","ref_id":234},{"sent":"left glass","ref_id":235},{"sent":"yellow fruit on left","ref_id":259}]}
\ No newline at end of file
diff --git a/elia/requirements.txt b/requirements.txt
similarity index 100%
rename from elia/requirements.txt
rename to requirements.txt
diff --git a/scripts/test_elia.py b/scripts/test_elia.py
new file mode 100644
index 0000000000000000000000000000000000000000..0724b091ebc2bdfd338352c0f7541c20d64a7c40
--- /dev/null
+++ b/scripts/test_elia.py
@@ -0,0 +1,20 @@
+
+import os
+import sys
+
+
+refcoco_ckpt_path = "model_best_refcoco_0508.pth"
+refcocop_ckpt_path = "model_best_refcoco+_0508.pth"
+
+cmd1 = 'python test_elia.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="val" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcoco_ckpt_path)
+cmd2 = 'python test_elia.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="testA" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcoco_ckpt_path)
+cmd3 = 'python test_elia.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="testB" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcoco_ckpt_path)
+cmd4 = 'python test_elia.py --model="lavt" --swin_type="base" --dataset="refcoco+" --split="val" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcocop_ckpt_path)
+cmd5 = 'python test_elia.py --model="lavt" --swin_type="base" --dataset="refcoco+" --split="testA" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcocop_ckpt_path)
+cmd6 = 'python test_elia.py --model="lavt" --swin_type="base" --dataset="refcoco+" --split="testB" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcocop_ckpt_path)
+
+cmds = [cmd1, cmd2, cmd3, cmd4, cmd5, cmd6]
+
+for cmd in cmds:
+    print(cmd, flush=True)
+    os.system(cmd)
diff --git a/scripts/test_lavt.py b/scripts/test_lavt.py
new file mode 100644
index 0000000000000000000000000000000000000000..eae9e074a2a81c511d5db302c4a50f460b66a180
--- /dev/null
+++ b/scripts/test_lavt.py
@@ -0,0 +1,20 @@
+
+import os
+import sys
+
+
+refcoco_ckpt_path = "/cluster/nvme4/cyx/lavt/vis/refcoco.pth"
+refcocop_ckpt_path = "/cluster/nvme4/cyx/lavt/vis/refcoco+.pth"
+
+cmd1 = 'python test_lavt.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="val" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcoco_ckpt_path)
+cmd2 = 'python test_lavt.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="testA" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcoco_ckpt_path)
+cmd3 = 'python test_lavt.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="testB" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcoco_ckpt_path)
+cmd4 = 'python test_lavt.py --model="lavt" --swin_type="base" --dataset="refcoco+" --split="val" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcocop_ckpt_path)
+cmd5 = 'python test_lavt.py --model="lavt" --swin_type="base" --dataset="refcoco+" --split="testA" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcocop_ckpt_path)
+cmd6 = 'python test_lavt.py --model="lavt" --swin_type="base" --dataset="refcoco+" --split="testB" --resume="{:s}" --workers="4" --ddp_trained_weights --window12 --img_size="480"'.format(refcocop_ckpt_path)
+
+cmds = [cmd1, cmd2, cmd3, cmd4, cmd5, cmd6]
+
+for cmd in cmds:
+    print(cmd, flush=True)
+    os.system(cmd)
diff --git a/scripts/train_elia_refcoco+.py b/scripts/train_elia_refcoco+.py
new file mode 100644
index 0000000000000000000000000000000000000000..40f5c0bfc6c98f848fff46c64b8d9ca584fd0c10
--- /dev/null
+++ b/scripts/train_elia_refcoco+.py
@@ -0,0 +1,10 @@
+import os
+import random
+
+port = random.randrange(2, 65534)
+
+cmd='NCCL_IB_DISABLE=1 NCCL_DEBUG=VERSION NCCL_P2P_DISABLE=1 python -m torch.distributed.launch --nproc_per_node="8" --master_port="{:d}" train_elia.py --model="lavt" --dataset="refcoco+" --model_id="refcoco+" --batch-size="4" --lr="0.00005" --wd="1e-2" --swin_type="base" --pretrained_swin_weights="./pretrained_weights/swin_base_patch4_window12_384_22k.pth" --epochs="40" --img_size="480" --plic_pos_temp="0.1" --plic_neg_temp="0.1" --plic_lang_temp="0.3" --plic_pos_weight="0.5" --plic_neg_weight="0.5" --plic_lang_weight="0.5" --smlm_weight="1.0" --output-dir="./checkpoints/elia_refcoco+"'.format(port)
+
+print(cmd, flush=True)
+
+os.system(cmd)
diff --git a/scripts/train_elia_refcoco.py b/scripts/train_elia_refcoco.py
new file mode 100644
index 0000000000000000000000000000000000000000..c8b6bb03e5dfef5d01ac71b4f0c5887cf236a60b
--- /dev/null
+++ b/scripts/train_elia_refcoco.py
@@ -0,0 +1,10 @@
+import os
+import random
+
+port = random.randrange(2, 65534)
+
+cmd='NCCL_IB_DISABLE=1 NCCL_DEBUG=VERSION NCCL_P2P_DISABLE=1 python -m torch.distributed.launch --nproc_per_node="8" --master_port="{:d}" train_elia.py --model="lavt" --dataset="refcoco" --model_id="refcoco" --batch-size="4" --lr="0.00005" --wd="1e-2" --swin_type="base" --pretrained_swin_weights="./pretrained_weights/swin_base_patch4_window12_384_22k.pth" --epochs="40" --img_size="480" --plic_pos_temp="0.1" --plic_neg_temp="0.1" --plic_lang_temp="0.3" --plic_pos_weight="0.5" --plic_neg_weight="0.5" --plic_lang_weight="0.5" --smlm_weight="0.1" --output-dir="./checkpoints/elia_refcoco"'.format(port)
+
+print(cmd, flush=True)
+
+os.system(cmd)
diff --git a/scripts/train_elia_refcoco_debug.py b/scripts/train_elia_refcoco_debug.py
new file mode 100644
index 0000000000000000000000000000000000000000..2252eaa8ac6ff322dbb3cd100158611c9692096c
--- /dev/null
+++ b/scripts/train_elia_refcoco_debug.py
@@ -0,0 +1,10 @@
+import os
+import random
+
+port = random.randrange(2, 65534)
+
+cmd='NCCL_IB_DISABLE=1 NCCL_DEBUG=VERSION NCCL_P2P_DISABLE=1 python -m torch.distributed.launch --nproc_per_node="2" --master_port="{:d}" train_elia.py --model="lavt" --dataset="refcoco" --model_id="refcoco" --batch-size="2" --lr="0.00005" --wd="1e-2" --swin_type="base" --pretrained_swin_weights="./pretrained_weights/swin_base_patch4_window12_384_22k.pth" --epochs="40" --img_size="224" --plic_pos_temp="0.1" --plic_neg_temp="0.1" --plic_lang_temp="0.3" --plic_pos_weight="0.5" --plic_neg_weight="0.5" --plic_lang_weight="0.5" --smlm_weight="0.1" --output-dir="./checkpoints/elia_refcoco_debug"'.format(port)
+
+print(cmd, flush=True)
+
+os.system(cmd)
diff --git a/scripts/train_elia_refcocog_google.py b/scripts/train_elia_refcocog_google.py
new file mode 100644
index 0000000000000000000000000000000000000000..8202a42dc453fe50c362a6e7b6ad993984c6f28a
--- /dev/null
+++ b/scripts/train_elia_refcocog_google.py
@@ -0,0 +1,10 @@
+import os
+import random
+
+port = random.randrange(2, 65534)
+
+cmd='NCCL_IB_DISABLE=1 NCCL_DEBUG=VERSION NCCL_P2P_DISABLE=1 python -m torch.distributed.launch --nproc_per_node="8" --master_port="{:d}" train_elia.py --model="lavt" --dataset="refcocog" --splitBy "google" --model_id="refcocog_google" --batch-size="4" --lr="0.00005" --wd="1e-2" --swin_type="base" --pretrained_swin_weights="./pretrained_weights/swin_base_patch4_window12_384_22k.pth" --epochs="40" --img_size="480" --plic_pos_temp="0.1" --plic_neg_temp="0.1" --plic_lang_temp="0.3" --plic_pos_weight="0.5" --plic_neg_weight="0.5" --plic_lang_weight="0.5" --smlm_weight="1.0" --output-dir="./checkpoints/elia_refcocog_google"'.format(port)
+
+print(cmd, flush=True)
+
+os.system(cmd)
diff --git a/scripts/train_elia_refcocog_umd.py b/scripts/train_elia_refcocog_umd.py
new file mode 100644
index 0000000000000000000000000000000000000000..dcffe24bcb818cf92cdd068570846d66e4ac4a99
--- /dev/null
+++ b/scripts/train_elia_refcocog_umd.py
@@ -0,0 +1,10 @@
+import os
+import random
+
+port = random.randrange(2, 65534)
+
+cmd='NCCL_IB_DISABLE=1 NCCL_DEBUG=VERSION NCCL_P2P_DISABLE=1 python -m torch.distributed.launch --nproc_per_node="8" --master_port="{:d}" train_elia.py --model="lavt" --dataset="refcocog" --splitBy "umd" --model_id="refcocog_umd" --batch-size="4" --lr="0.00005" --wd="1e-2" --swin_type="base" --pretrained_swin_weights="./pretrained_weights/swin_base_patch4_window12_384_22k.pth" --epochs="40" --img_size="480" --plic_pos_temp="0.1" --plic_neg_temp="0.1" --plic_lang_temp="0.3" --plic_pos_weight="0.5" --plic_neg_weight="0.5" --plic_lang_weight="0.5" --smlm_weight="1.0" --output-dir="./checkpoints/elia_refcocog_umd"'.format(port)
+
+print(cmd, flush=True)
+
+os.system(cmd)
diff --git a/scripts/visualize.py b/scripts/visualize.py
new file mode 100644
index 0000000000000000000000000000000000000000..32d332ceb3cadf91575176f44cfcf3915a8a2347
--- /dev/null
+++ b/scripts/visualize.py
@@ -0,0 +1,10 @@
+
+import os
+
+
+
+
+cmd = 'python visualize.py --model="lavt" --swin_type="base" --dataset="refcoco" --split="val" --resume="model_best_refcoco.pth" --workers="4" --ddp_trained_weights --window12 --img_size="224" --vis_dir="elia_refcoco"'
+
+print(cmd, flush=True)
+os.system(cmd)
diff --git a/elia/test_elia.py b/test_elia.py
similarity index 100%
rename from elia/test_elia.py
rename to test_elia.py
diff --git a/elia/test_lavt.py b/test_lavt.py
similarity index 100%
rename from elia/test_lavt.py
rename to test_lavt.py
diff --git a/elia/train_elia.py b/train_elia.py
similarity index 100%
rename from elia/train_elia.py
rename to train_elia.py
diff --git a/elia/train_lavt.py b/train_lavt.py
similarity index 100%
rename from elia/train_lavt.py
rename to train_lavt.py
diff --git a/elia/transforms.py b/transforms.py
similarity index 100%
rename from elia/transforms.py
rename to transforms.py
diff --git a/elia/utils.py b/utils.py
similarity index 100%
rename from elia/utils.py
rename to utils.py
diff --git a/elia/visualize.py b/visualize.py
similarity index 100%
rename from elia/visualize.py
rename to visualize.py
diff --git a/elia/visualize_final.py b/visualize_final.py
similarity index 100%
rename from elia/visualize_final.py
rename to visualize_final.py