project-monai committed · verified
Commit ea09684 · 1 Parent(s): 4fb97de

Upload swin_unetr_btcv_segmentation version 0.5.7
LICENSE ADDED
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
configs/evaluate.json ADDED
@@ -0,0 +1,76 @@
+ {
+ "validate#postprocessing": {
+ "_target_": "Compose",
+ "transforms": [
+ {
+ "_target_": "Activationsd",
+ "keys": "pred",
+ "softmax": true
+ },
+ {
+ "_target_": "Invertd",
+ "keys": [
+ "pred",
+ "label"
+ ],
+ "transform": "@validate#preprocessing",
+ "orig_keys": "image",
+ "meta_key_postfix": "meta_dict",
+ "nearest_interp": [
+ false,
+ true
+ ],
+ "to_tensor": true
+ },
+ {
+ "_target_": "AsDiscreted",
+ "keys": [
+ "pred",
+ "label"
+ ],
+ "argmax": [
+ true,
+ false
+ ],
+ "to_onehot": 14
+ },
+ {
+ "_target_": "SaveImaged",
+ "keys": "pred",
+ "meta_keys": "pred_meta_dict",
+ "output_dir": "@output_dir",
+ "resample": false,
+ "squeeze_end_dims": true
+ }
+ ]
+ },
+ "validate#handlers": [
+ {
+ "_target_": "CheckpointLoader",
+ "load_path": "$@ckpt_dir + '/model.pt'",
+ "load_dict": {
+ "model": "@network"
+ }
+ },
+ {
+ "_target_": "StatsHandler",
+ "iteration_log": false
+ },
+ {
+ "_target_": "MetricsSaver",
+ "save_dir": "@output_dir",
+ "metrics": [
+ "val_mean_dice",
+ "val_acc"
+ ],
+ "metric_details": [
+ "val_mean_dice"
+ ],
+ "batch_transform": "$lambda x: [xx['image'].meta for xx in x]",
+ "summary_ops": "*"
+ }
+ ],
+ "run": [
+ "$@validate#evaluator.run()"
+ ]
+ }
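
Note that `evaluate.json` is not a standalone config: it overrides the `validate` section of `configs/train.json` when the two files are loaded together, with later files replacing matching ids. A minimal sketch of that composition through MONAI's `ConfigParser`, assuming it is run from the bundle root:

```
from monai.bundle import ConfigParser

# Later files in the list override matching ids from earlier ones, so the
# "validate#postprocessing" defined above replaces the one in train.json.
parser = ConfigParser()
parser.read_config(["configs/train.json", "configs/evaluate.json"])
postprocessing = parser.get_parsed_content("validate#postprocessing")  # instantiates the Compose above
print(type(postprocessing))
```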
configs/inference.json ADDED
@@ -0,0 +1,155 @@
+ {
+ "imports": [
+ "$import glob",
+ "$import numpy",
+ "$import os"
+ ],
+ "bundle_root": ".",
+ "checkpoint": "$@bundle_root + '/models/model.pt'",
+ "image_key": "image",
+ "output_dir": "$@bundle_root + '/eval'",
+ "output_ext": ".nii.gz",
+ "output_dtype": "$numpy.float32",
+ "output_postfix": "trans",
+ "separate_folder": true,
+ "load_pretrain": true,
+ "dataset_dir": "/workspace/data/RawData/",
+ "datalist": "$list(sorted(glob.glob(@dataset_dir + '/imagesTs/*.nii.gz')))",
+ "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
+ "network_def": {
+ "_target_": "SwinUNETR",
+ "spatial_dims": 3,
+ "img_size": 96,
+ "in_channels": 1,
+ "out_channels": 14,
+ "feature_size": 48,
+ "use_checkpoint": false
+ },
+ "network": "$@network_def.to(@device)",
+ "preprocessing": {
+ "_target_": "Compose",
+ "transforms": [
+ {
+ "_target_": "LoadImaged",
+ "keys": "@image_key",
+ "reader": "ITKReader"
+ },
+ {
+ "_target_": "EnsureChannelFirstd",
+ "keys": "@image_key"
+ },
+ {
+ "_target_": "Orientationd",
+ "keys": "@image_key",
+ "axcodes": "RAS"
+ },
+ {
+ "_target_": "Spacingd",
+ "keys": "@image_key",
+ "pixdim": [
+ 1.5,
+ 1.5,
+ 2.0
+ ],
+ "mode": "bilinear"
+ },
+ {
+ "_target_": "ScaleIntensityRanged",
+ "keys": "@image_key",
+ "a_min": -175,
+ "a_max": 250,
+ "b_min": 0.0,
+ "b_max": 1.0,
+ "clip": true
+ },
+ {
+ "_target_": "EnsureTyped",
+ "keys": "@image_key"
+ }
+ ]
+ },
+ "dataset": {
+ "_target_": "Dataset",
+ "data": "$[{'image': i} for i in @datalist]",
+ "transform": "@preprocessing"
+ },
+ "dataloader": {
+ "_target_": "DataLoader",
+ "dataset": "@dataset",
+ "batch_size": 1,
+ "shuffle": false,
+ "num_workers": 4
+ },
+ "inferer": {
+ "_target_": "SlidingWindowInferer",
+ "roi_size": [
+ 96,
+ 96,
+ 96
+ ],
+ "sw_batch_size": 4,
+ "overlap": 0.5
+ },
+ "postprocessing": {
+ "_target_": "Compose",
+ "transforms": [
+ {
+ "_target_": "Activationsd",
+ "keys": "pred",
+ "softmax": true
+ },
+ {
+ "_target_": "Invertd",
+ "keys": "pred",
+ "transform": "@preprocessing",
+ "orig_keys": "@image_key",
+ "nearest_interp": false,
+ "to_tensor": true
+ },
+ {
+ "_target_": "AsDiscreted",
+ "keys": "pred",
+ "argmax": true
+ },
+ {
+ "_target_": "SaveImaged",
+ "keys": "pred",
+ "output_dir": "@output_dir",
+ "output_ext": "@output_ext",
+ "output_dtype": "@output_dtype",
+ "output_postfix": "@output_postfix",
+ "separate_folder": "@separate_folder"
+ }
+ ]
+ },
+ "handlers": [
+ {
+ "_target_": "StatsHandler",
+ "iteration_log": false
+ }
+ ],
+ "evaluator": {
+ "_target_": "SupervisedEvaluator",
+ "device": "@device",
+ "val_data_loader": "@dataloader",
+ "network": "@network",
+ "inferer": "@inferer",
+ "postprocessing": "@postprocessing",
+ "val_handlers": "@handlers",
+ "amp": true
+ },
+ "checkpointloader": {
+ "_target_": "CheckpointLoader",
+ "load_path": "$@checkpoint",
+ "load_dict": {
+ "model": "@network"
+ }
+ },
+ "initialize": [
+ "$monai.utils.set_determinism(seed=123)",
+ "$@checkpointloader(@evaluator) if @load_pretrain else None"
+ ],
+ "run": [
+ "$@evaluator.run()"
+ ]
+ }
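
As a rough Pythonic counterpart to the inference CLI command in `docs/README.md`, a minimal sketch using MONAI's `BundleWorkflow` interface (assumptions: executed from the bundle root, with the checkpoint present under `models/`):

```
from monai.bundle import ConfigWorkflow

workflow = ConfigWorkflow(
    config_file="configs/inference.json",
    meta_file="configs/metadata.json",
    logging_file="configs/logging.conf",
    workflow_type="infer",
)
workflow.initialize()  # evaluates the "initialize" expressions above (determinism seed, checkpoint loading)
workflow.run()         # evaluates "run", i.e. @evaluator.run()
workflow.finalize()
```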
configs/inference_trt.json ADDED
@@ -0,0 +1,9 @@
+ {
+ "+imports": [
+ "$from monai.networks import trt_compile"
+ ],
+ "trt_args": {
+ "dynamic_batchsize": "$[1, @inferer#sw_batch_size, @inferer#sw_batch_size]"
+ },
+ "network": "$trt_compile(@network_def.to(@device), @checkpoint, args=@trt_args)"
+ }
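
For reference, a rough sketch (not part of the bundle) of what the `network` expression above evaluates to, assuming a CUDA environment with TensorRT and Torch-TensorRT available and the `sw_batch_size` of 4 from `configs/inference.json`:

```
import torch
from monai.networks import trt_compile
from monai.networks.nets import SwinUNETR

device = torch.device("cuda:0")
model = SwinUNETR(img_size=96, in_channels=1, out_channels=14,
                  feature_size=48, spatial_dims=3).to(device)
# Wraps the model so that TensorRT engines are built lazily on the first
# forward() call, using the checkpoint path as the base for the engine cache.
model = trt_compile(model, "models/model.pt", args={"dynamic_batchsize": [1, 4, 4]})
```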
configs/logging.conf ADDED
@@ -0,0 +1,21 @@
+ [loggers]
+ keys=root
+
+ [handlers]
+ keys=consoleHandler
+
+ [formatters]
+ keys=fullFormatter
+
+ [logger_root]
+ level=INFO
+ handlers=consoleHandler
+
+ [handler_consoleHandler]
+ class=StreamHandler
+ level=INFO
+ formatter=fullFormatter
+ args=(sys.stdout,)
+
+ [formatter_fullFormatter]
+ format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
configs/metadata.json ADDED
@@ -0,0 +1,124 @@
+ {
+ "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json",
+ "version": "0.5.7",
+ "changelog": {
+ "0.5.7": "update to huggingface hosting",
+ "0.5.6": "update tensorrt benchmark results",
+ "0.5.5": "enable tensorrt",
+ "0.5.4": "update to use monai 1.3.1",
+ "0.5.3": "add load_pretrain flag for infer",
+ "0.5.2": "add checkpoint loader for infer",
+ "0.5.1": "remove meta_dict usage",
+ "0.5.0": "fix the wrong GPU index issue of multi-node",
+ "0.4.9": "remove error dollar symbol in readme",
+ "0.4.8": "add RAM usage with CacheDataset",
+ "0.4.7": "deterministic retrain benchmark",
+ "0.4.6": "fix mgpu finalize issue",
+ "0.4.5": "enable deterministic training",
+ "0.4.4": "update numbers",
+ "0.4.3": "adapt to BundleWorkflow interface",
+ "0.4.2": "fix train params of use_checkpoint",
+ "0.4.1": "update params to support torch.jit.trace torchscript conversion",
+ "0.4.0": "add name tag",
+ "0.3.9": "use ITKReader to avoid mass logs at image loading",
+ "0.3.8": "restructure readme to match updated template",
+ "0.3.7": "Update metric in metadata",
+ "0.3.6": "Update ckpt drive link",
+ "0.3.5": "Update figure and benchmarking",
+ "0.3.4": "Update figure link in readme",
+ "0.3.3": "Update, verify MONAI 1.0.1 and Pytorch 1.13.0",
+ "0.3.2": "enhance readme on commands example",
+ "0.3.1": "fix license Copyright error",
+ "0.3.0": "update license files",
+ "0.2.0": "unify naming",
+ "0.1.0": "complete the model package",
+ "0.0.1": "initialize the model package structure"
+ },
+ "monai_version": "1.4.0",
+ "pytorch_version": "2.4.0",
+ "numpy_version": "1.24.4",
+ "required_packages_version": {
+ "itk": "5.4.0",
+ "nibabel": "5.2.1",
+ "pytorch-ignite": "0.4.11",
+ "einops": "0.7.0",
+ "tensorboard": "2.17.0"
+ },
+ "supported_apps": {},
+ "name": "Swin UNETR BTCV segmentation",
+ "task": "BTCV multi-organ segmentation",
+ "description": "A pre-trained model for volumetric (3D) multi-organ segmentation from CT images",
+ "authors": "MONAI team",
+ "copyright": "Copyright (c) MONAI Consortium",
+ "data_source": "RawData.zip from https://www.synapse.org/#!Synapse:syn3193805/wiki/217752/",
+ "data_type": "nibabel",
+ "image_classes": "single channel data, intensity scaled to [0, 1]",
+ "label_classes": "multi-channel data, 0:background, 1:spleen, 2:Right Kidney, 3:Left Kidney, 4:Gallbladder, 5:Esophagus, 6:Liver, 7:Stomach, 8:Aorta, 9:IVC, 10:Portal and Splenic Veins, 11:Pancreas, 12:Right adrenal gland, 13:Left adrenal gland",
+ "pred_classes": "14 channels OneHot data, 0:background, 1:spleen, 2:Right Kidney, 3:Left Kidney, 4:Gallbladder, 5:Esophagus, 6:Liver, 7:Stomach, 8:Aorta, 9:IVC, 10:Portal and Splenic Veins, 11:Pancreas, 12:Right adrenal gland, 13:Left adrenal gland",
+ "eval_metrics": {
+ "mean_dice": 0.82
+ },
+ "intended_use": "This is an example, not to be used for diagnostic purposes",
+ "references": [
+ "Hatamizadeh, Ali, et al. 'Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images.' arXiv preprint arXiv:2201.01266 (2022). https://arxiv.org/abs/2201.01266.",
+ "Tang, Yucheng, et al. 'Self-supervised pre-training of swin transformers for 3d medical image analysis.' arXiv preprint arXiv:2111.14791 (2021). https://arxiv.org/abs/2111.14791."
+ ],
+ "network_data_format": {
+ "inputs": {
+ "image": {
+ "type": "image",
+ "format": "hounsfield",
+ "modality": "CT",
+ "num_channels": 1,
+ "spatial_shape": [
+ 96,
+ 96,
+ 96
+ ],
+ "dtype": "float32",
+ "value_range": [
+ 0,
+ 1
+ ],
+ "is_patch_data": true,
+ "channel_def": {
+ "0": "image"
+ }
+ }
+ },
+ "outputs": {
+ "pred": {
+ "type": "image",
+ "format": "segmentation",
+ "num_channels": 14,
+ "spatial_shape": [
+ 96,
+ 96,
+ 96
+ ],
+ "dtype": "float32",
+ "value_range": [
+ 0,
+ 1
+ ],
+ "is_patch_data": true,
+ "channel_def": {
+ "0": "background",
+ "1": "spleen",
+ "2": "Right Kidney",
+ "3": "Left Kidney",
+ "4": "Gallbladder",
+ "5": "Esophagus",
+ "6": "Liver",
+ "7": "Stomach",
+ "8": "Aorta",
+ "9": "IVC",
+ "10": "Portal and Splenic Veins",
+ "11": "Pancreas",
+ "12": "Right adrenal gland",
+ "13": "Left adrenal gland"
+ }
+ }
+ }
+ }
+ }
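
The `network_data_format` above declares a patch-based contract: a one-channel 96 x 96 x 96 input scaled to [0, 1], and a 14-channel output of the same spatial size. A quick sanity-check sketch of that contract with dummy data and untrained weights:

```
import torch
from monai.networks.nets import SwinUNETR

net = SwinUNETR(img_size=96, in_channels=1, out_channels=14,
                feature_size=48, spatial_dims=3)
x = torch.rand(1, 1, 96, 96, 96)  # one single-channel patch with intensities in [0, 1]
with torch.no_grad():
    y = net(x)
print(y.shape)  # expected: torch.Size([1, 14, 96, 96, 96])
```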
configs/multi_gpu_train.json ADDED
@@ -0,0 +1,39 @@
+ {
+ "device": "$torch.device('cuda:' + os.environ['LOCAL_RANK'])",
+ "network": {
+ "_target_": "torch.nn.parallel.DistributedDataParallel",
+ "module": "$@network_def.to(@device)",
+ "device_ids": [
+ "@device"
+ ]
+ },
+ "train#sampler": {
+ "_target_": "DistributedSampler",
+ "dataset": "@train#dataset",
+ "even_divisible": true,
+ "shuffle": true
+ },
+ "train#dataloader#sampler": "@train#sampler",
+ "train#dataloader#shuffle": false,
+ "train#trainer#train_handlers": "$@train#handlers[: -2 if dist.get_rank() > 0 else None]",
+ "validate#sampler": {
+ "_target_": "DistributedSampler",
+ "dataset": "@validate#dataset",
+ "even_divisible": false,
+ "shuffle": false
+ },
+ "validate#dataloader#sampler": "@validate#sampler",
+ "validate#evaluator#val_handlers": "$None if dist.get_rank() > 0 else @validate#handlers",
+ "initialize": [
+ "$import torch.distributed as dist",
+ "$dist.is_initialized() or dist.init_process_group(backend='nccl')",
+ "$torch.cuda.set_device(@device)",
+ "$monai.utils.set_determinism(seed=123)"
+ ],
+ "run": [
+ "$@train#trainer.run()"
+ ],
+ "finalize": [
+ "$dist.is_initialized() and dist.destroy_process_group()"
+ ]
+ }
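
One subtle line above is the `train#trainer#train_handlers` expression: ranks other than 0 slice off the last two handlers defined in `configs/train.json` (the stats and TensorBoard loggers), so only rank 0 writes logs. A plain-Python sketch of that slice:

```
# Mirrors "$@train#handlers[: -2 if dist.get_rank() > 0 else None]"
handlers = ["ValidationHandler", "StatsHandler", "TensorBoardStatsHandler"]  # order from train.json

for rank in (0, 1):
    kept = handlers[: -2 if rank > 0 else None]
    print(rank, kept)
# rank 0 keeps all three handlers; rank 1 keeps only ['ValidationHandler']
```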
configs/train.json ADDED
@@ -0,0 +1,327 @@
+ {
+ "imports": [
+ "$import glob",
+ "$import os",
+ "$import ignite"
+ ],
+ "bundle_root": ".",
+ "ckpt_dir": "$@bundle_root + '/models'",
+ "output_dir": "$@bundle_root + '/eval'",
+ "dataset_dir": "/workspace/data/RawData/",
+ "images": "$list(sorted(glob.glob(@dataset_dir + '/imagesTr/*.nii.gz')))",
+ "labels": "$list(sorted(glob.glob(@dataset_dir + '/labelsTr/*.nii.gz')))",
+ "val_interval": 5,
+ "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
+ "network_def": {
+ "_target_": "SwinUNETR",
+ "spatial_dims": 3,
+ "img_size": 96,
+ "in_channels": 1,
+ "out_channels": 14,
+ "feature_size": 48,
+ "use_checkpoint": true
+ },
+ "network": "$@network_def.to(@device)",
+ "loss": {
+ "_target_": "DiceCELoss",
+ "to_onehot_y": true,
+ "softmax": true,
+ "squared_pred": true,
+ "batch": true
+ },
+ "optimizer": {
+ "_target_": "torch.optim.Adam",
+ "params": "$@network.parameters()",
+ "lr": 0.0002
+ },
+ "train": {
+ "deterministic_transforms": [
+ {
+ "_target_": "LoadImaged",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "reader": "ITKReader"
+ },
+ {
+ "_target_": "EnsureChannelFirstd",
+ "keys": [
+ "image",
+ "label"
+ ]
+ },
+ {
+ "_target_": "Orientationd",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "axcodes": "RAS"
+ },
+ {
+ "_target_": "Spacingd",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "pixdim": [
+ 1.5,
+ 1.5,
+ 2.0
+ ],
+ "mode": [
+ "bilinear",
+ "nearest"
+ ]
+ },
+ {
+ "_target_": "ScaleIntensityRanged",
+ "keys": "image",
+ "a_min": -175,
+ "a_max": 250,
+ "b_min": 0.0,
+ "b_max": 1.0,
+ "clip": true
+ },
+ {
+ "_target_": "EnsureTyped",
+ "keys": [
+ "image",
+ "label"
+ ]
+ }
+ ],
+ "random_transforms": [
+ {
+ "_target_": "RandCropByPosNegLabeld",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "label_key": "label",
+ "spatial_size": [
+ 96,
+ 96,
+ 96
+ ],
+ "pos": 1,
+ "neg": 1,
+ "num_samples": 2,
+ "image_key": "image",
+ "image_threshold": 0
+ },
+ {
+ "_target_": "RandFlipd",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "spatial_axis": [
+ 0
+ ],
+ "prob": 0.1
+ },
+ {
+ "_target_": "RandFlipd",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "spatial_axis": [
+ 1
+ ],
+ "prob": 0.1
+ },
+ {
+ "_target_": "RandFlipd",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "spatial_axis": [
+ 2
+ ],
+ "prob": 0.1
+ },
+ {
+ "_target_": "RandRotate90d",
+ "keys": [
+ "image",
+ "label"
+ ],
+ "max_k": 3,
+ "prob": 0.1
+ },
+ {
+ "_target_": "RandShiftIntensityd",
+ "keys": "image",
+ "offsets": 0.1,
+ "prob": 0.5
+ }
+ ],
+ "preprocessing": {
+ "_target_": "Compose",
+ "transforms": "$@train#deterministic_transforms + @train#random_transforms"
+ },
+ "dataset": {
+ "_target_": "CacheDataset",
+ "data": "$[{'image': i, 'label': l} for i, l in zip(@images[:-9], @labels[:-9])]",
+ "transform": "@train#preprocessing",
+ "cache_rate": 1.0,
+ "num_workers": 4
+ },
+ "dataloader": {
+ "_target_": "DataLoader",
+ "dataset": "@train#dataset",
+ "batch_size": 2,
+ "shuffle": true,
+ "num_workers": 4
+ },
+ "inferer": {
+ "_target_": "SimpleInferer"
+ },
+ "postprocessing": {
+ "_target_": "Compose",
+ "transforms": [
+ {
+ "_target_": "Activationsd",
+ "keys": "pred",
+ "softmax": true
+ },
+ {
+ "_target_": "AsDiscreted",
+ "keys": [
+ "pred",
+ "label"
+ ],
+ "argmax": [
+ true,
+ false
+ ],
+ "to_onehot": 14
+ }
+ ]
+ },
+ "handlers": [
+ {
+ "_target_": "ValidationHandler",
+ "validator": "@validate#evaluator",
+ "epoch_level": true,
+ "interval": "@val_interval"
+ },
+ {
+ "_target_": "StatsHandler",
+ "tag_name": "train_loss",
+ "output_transform": "$monai.handlers.from_engine(['loss'], first=True)"
+ },
+ {
+ "_target_": "TensorBoardStatsHandler",
+ "log_dir": "@output_dir",
+ "tag_name": "train_loss",
+ "output_transform": "$monai.handlers.from_engine(['loss'], first=True)"
+ }
+ ],
+ "key_metric": {
+ "train_accuracy": {
+ "_target_": "ignite.metrics.Accuracy",
+ "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
+ }
+ },
+ "trainer": {
+ "_target_": "SupervisedTrainer",
+ "max_epochs": 500,
+ "device": "@device",
+ "train_data_loader": "@train#dataloader",
+ "network": "@network",
+ "loss_function": "@loss",
+ "optimizer": "@optimizer",
+ "inferer": "@train#inferer",
+ "postprocessing": "@train#postprocessing",
+ "key_train_metric": "@train#key_metric",
+ "train_handlers": "@train#handlers",
+ "amp": true
+ }
+ },
+ "validate": {
+ "preprocessing": {
+ "_target_": "Compose",
+ "transforms": "%train#deterministic_transforms"
+ },
+ "dataset": {
+ "_target_": "CacheDataset",
+ "data": "$[{'image': i, 'label': l} for i, l in zip(@images[-9:], @labels[-9:])]",
+ "transform": "@validate#preprocessing",
+ "cache_rate": 1.0
+ },
+ "dataloader": {
+ "_target_": "DataLoader",
+ "dataset": "@validate#dataset",
+ "batch_size": 1,
+ "shuffle": false,
+ "num_workers": 4
+ },
+ "inferer": {
+ "_target_": "SlidingWindowInferer",
+ "roi_size": [
+ 96,
+ 96,
+ 96
+ ],
+ "sw_batch_size": 2,
+ "overlap": 0.25
+ },
+ "postprocessing": "%train#postprocessing",
+ "handlers": [
+ {
+ "_target_": "StatsHandler",
+ "iteration_log": false
+ },
+ {
+ "_target_": "TensorBoardStatsHandler",
+ "log_dir": "@output_dir",
+ "iteration_log": false
+ },
+ {
+ "_target_": "CheckpointSaver",
+ "save_dir": "@ckpt_dir",
+ "save_dict": {
+ "model": "@network"
+ },
+ "save_key_metric": true,
+ "key_metric_filename": "model.pt"
+ }
+ ],
+ "key_metric": {
+ "val_mean_dice": {
+ "_target_": "MeanDice",
+ "include_background": false,
+ "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
+ }
+ },
+ "additional_metrics": {
+ "val_accuracy": {
+ "_target_": "ignite.metrics.Accuracy",
+ "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
+ }
+ },
+ "evaluator": {
+ "_target_": "SupervisedEvaluator",
+ "device": "@device",
+ "val_data_loader": "@validate#dataloader",
+ "network": "@network",
+ "inferer": "@validate#inferer",
+ "postprocessing": "@validate#postprocessing",
+ "key_val_metric": "@validate#key_metric",
+ "additional_metrics": "@validate#additional_metrics",
+ "val_handlers": "@validate#handlers",
+ "amp": true
+ }
+ },
+ "initialize": [
+ "$monai.utils.set_determinism(seed=123)"
+ ],
+ "run": [
+ "$@train#trainer.run()"
+ ]
+ }
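
Note the dataset expressions above: `@images[:-9]`/`@labels[:-9]` train on all but the last nine sorted image/label pairs from `imagesTr`/`labelsTr`, and `@images[-9:]`/`@labels[-9:]` hold those nine out for validation. A plain-Python sketch of the same split (the dataset path is the config default; file names depend on the downloaded data):

```
import glob

dataset_dir = "/workspace/data/RawData"  # as in the config; adjust to your data
images = sorted(glob.glob(dataset_dir + "/imagesTr/*.nii.gz"))
labels = sorted(glob.glob(dataset_dir + "/labelsTr/*.nii.gz"))

# All but the last 9 pairs are used for training; the last 9 for validation.
train_files = [{"image": i, "label": l} for i, l in zip(images[:-9], labels[:-9])]
val_files = [{"image": i, "label": l} for i, l in zip(images[-9:], labels[-9:])]
print(len(train_files), len(val_files))
```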
docs/README.md ADDED
@@ -0,0 +1,163 @@
+ # Model Overview
+ A pre-trained Swin UNETR [1,2] for volumetric (3D) multi-organ segmentation using CT images from the Beyond the Cranial Vault (BTCV) Segmentation Challenge dataset [3].
+
+ ![model workflow](https://developer.download.nvidia.com/assets/Clara/Images/monai_swin_unetr_btcv_segmentation_workflow_v1.png)
+
+ ## Data
+ The training data is from the [BTCV dataset](https://www.synapse.org/#!Synapse:syn3193805/wiki/89480/) (register through `Synapse` and download `Abdomen/RawData.zip`).
+
+ - Target: Multi-organs
+ - Task: Segmentation
+ - Modality: CT
+ - Size: 30 3D volumes (24 Training + 6 Testing)
+
+ ### Preprocessing
+ The dataset folder structure needs to be reorganized using the following commands:
+
+ ```
+ unzip RawData.zip
+ mv RawData/Training/img/ RawData/imagesTr
+ mv RawData/Training/label/ RawData/labelsTr
+ mv RawData/Testing/img/ RawData/imagesTs
+ ```
+
+ ## Training configuration
+ The training was performed with the following:
+ - GPU: At least 32GB of GPU memory
+ - Actual Model Input: 96 x 96 x 96
+ - AMP: True
+ - Optimizer: Adam
+ - Learning Rate: 2e-4
+
+ ### Memory Consumption
+
+ - Dataset Manager: CacheDataset
+ - Data Size: 30 samples
+ - Cache Rate: 1.0
+ - Single GPU - System RAM Usage: 5.8G
+
+ ### Memory Consumption Warning
+
+ If you face memory issues with CacheDataset, you can either switch to the regular Dataset class or lower the cache rate (`cache_rate`, within the range [0, 1]) in the configuration to reduce the system RAM requirements.
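+
+ For example, a minimal sketch of lowering the cache rate at runtime without editing the config, using the bundle's Pythonic `run` API (the value 0.5 is illustrative):
+
+ ```
+ from monai.bundle import run
+
+ # "train#dataset#cache_rate" addresses the nested config entry by id;
+ # caching half of the samples roughly halves the system RAM needed.
+ run(
+     run_id="run",
+     init_id="initialize",
+     config_file="configs/train.json",
+     **{"train#dataset#cache_rate": 0.5},
+ )
+ ```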
+
+ ### Input
+ 1 channel
+ - CT image
+
+ ### Output
+ 14 channels:
+ - 0: Background
+ - 1: Spleen
+ - 2: Right Kidney
+ - 3: Left Kidney
+ - 4: Gallbladder
+ - 5: Esophagus
+ - 6: Liver
+ - 7: Stomach
+ - 8: Aorta
+ - 9: IVC
+ - 10: Portal and Splenic Veins
+ - 11: Pancreas
+ - 12: Right adrenal gland
+ - 13: Left adrenal gland
+
+ ## Performance
+ The Dice score was used to evaluate the performance of the model. This model achieves a mean Dice score of 0.82.
+
+ #### Training Loss
+ ![The figure shows the training loss curve for 10K iterations.](https://developer.download.nvidia.com/assets/Clara/Images/monai_swin_unetr_btcv_segmentation_train_loss_v2.png)
+
+ #### Validation Dice
+
+ ![A graph showing the validation mean Dice for 5000 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_swin_unetr_btcv_segmentation_val_dice_v2.png)
+
+ #### TensorRT speedup
+ The `swin_unetr` bundle supports acceleration with TensorRT. The table below displays the speedup ratios observed on an A100 80G GPU. Please note that 32-bit precision models are benchmarked with the tf32 weight format.
+
+ | method | torch_tf32(ms) | torch_amp(ms) | trt_tf32(ms) | trt_fp16(ms) | speedup amp | speedup tf32 | speedup fp16 | amp vs fp16 |
+ | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+ | model computation | 123.64 | 123.77 | 93.22 | 42.87 | 1.00 | 1.33 | 2.88 | 2.89 |
+ | end2end | 5102 | 4895 | 2863 | 2835 | 1.04 | 1.78 | 1.80 | 1.73 |
+
+ Where:
+ - `model computation` refers to the model's inference with a random input, excluding preprocessing and postprocessing.
+ - `end2end` means running the bundle end-to-end with the TensorRT-based model.
+ - `torch_tf32` and `torch_amp` are for the PyTorch model without and with `amp` mode, respectively.
+ - `trt_tf32` and `trt_fp16` are for the TensorRT-based models converted to the corresponding precision.
+ - `speedup amp`, `speedup tf32` and `speedup fp16` are the speedup ratios of the corresponding models versus the PyTorch float32 model.
+ - `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16-based model.
+
+ These results were benchmarked under:
+ - TensorRT: 10.3.0+cuda12.6
+ - Torch-TensorRT Version: 2.4.0
+ - CPU Architecture: x86-64
+ - OS: Ubuntu 20.04
+ - Python version: 3.10.12
+ - CUDA version: 12.6
+ - GPU models and configuration: A100 80G
+
+ ## MONAI Bundle Commands
+ In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.
+
+ For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html).
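+
+ As a rough sketch of the Pythonic counterpart to the commands below (assuming the bundle root as the working directory):
+
+ ```
+ from monai.bundle import run
+
+ # Equivalent to: python -m monai.bundle run --config_file configs/train.json
+ # (arguments can also be predefined in a file and passed via --args_file on the CLI)
+ run(
+     run_id="run",
+     init_id="initialize",
+     config_file="configs/train.json",
+     logging_file="configs/logging.conf",
+     meta_file="configs/metadata.json",
+ )
+ ```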
103
+
104
+ #### Execute training:
105
+
106
+ ```
107
+ python -m monai.bundle run --config_file configs/train.json
108
+ ```
109
+
110
+ Please note that if the default dataset path is not modified with the actual path in the bundle config files, you can also override it by using `--dataset_dir`:
111
+
112
+ ```
113
+ python -m monai.bundle run --config_file configs/train.json --dataset_dir <actual dataset path>
114
+ ```
115
+
116
+ #### Override the `train` config to execute multi-GPU training:
117
+
118
+ ```
119
+ torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run --config_file "['configs/train.json','configs/multi_gpu_train.json']"
120
+ ```
121
+
122
+ Please note that the distributed training-related options depend on the actual running environment; thus, users may need to remove `--standalone`, modify `--nnodes`, or do some other necessary changes according to the machine used. For more details, please refer to [pytorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
123
+
124
+ #### Override the `train` config to execute evaluation with the trained model:
125
+
126
+ ```
127
+ python -m monai.bundle run --config_file "['configs/train.json','configs/evaluate.json']"
128
+ ```
129
+
130
+ #### Execute inference:
131
+
132
+ ```
133
+ python -m monai.bundle run --config_file configs/inference.json
134
+ ```
135
+
136
+ #### Execute inference with the TensorRT model:
137
+
138
+ ```
139
+ python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
140
+ ```
141
+
142
+
143
+ # References
144
+ [1] Hatamizadeh, Ali, et al. "Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images." arXiv preprint arXiv:2201.01266 (2022). https://arxiv.org/abs/2201.01266.
145
+
146
+ [2] Tang, Yucheng, et al. "Self-supervised pre-training of swin transformers for 3d medical image analysis." arXiv preprint arXiv:2111.14791 (2021). https://arxiv.org/abs/2111.14791.
147
+
148
+ [3] Landman B, et al. "MICCAI multi-atlas labeling beyond the cranial vault–workshop and challenge." In Proc. of the MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge 2015 Oct (Vol. 5, p. 12).
149
+
150
+ # License
151
+ Copyright (c) MONAI Consortium
152
+
153
+ Licensed under the Apache License, Version 2.0 (the "License");
154
+ you may not use this file except in compliance with the License.
155
+ You may obtain a copy of the License at
156
+
157
+ http://www.apache.org/licenses/LICENSE-2.0
158
+
159
+ Unless required by applicable law or agreed to in writing, software
160
+ distributed under the License is distributed on an "AS IS" BASIS,
161
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
162
+ See the License for the specific language governing permissions and
163
+ limitations under the License.
docs/data_license.txt ADDED
@@ -0,0 +1,6 @@
+ Third Party Licenses
+ -----------------------------------------------------------------------
+
+ /*********************************************************************/
+ i. Medical Segmentation Decathlon
+ http://medicaldecathlon.com/
models/model.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52e7c3114444e41bb14f644e0dd2b7d42d70ad4b4dec0c1bfa4a552a4b92a096
+ size 256336065