pyMEAL: Multi-Encoder-Augmentation-Aware-Learning

pyMEAL is a multi-encoder framework for augmentation-aware learning that performs accurate CT-to-T1-weighted MRI translation under diverse augmentations. It uses four dedicated encoders and captures augmentation-specific features through one of three fusion strategies: concatenation (CC), a fusion layer (FL), or a controller block (BD). MEAL-BD outperforms conventional augmentation methods, achieving SSIM > 0.83 and PSNR > 25 dB on CT-to-T1w translation.
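To illustrate the multi-encoder idea, the sketch below builds a toy Keras model in which four encoder streams (one per augmented view of the same CT slice) are fused by channel-wise concatenation, i.e. the CC strategy. The layer choices, filter counts, and the 256x256 input size are illustrative assumptions, not the published pyMEAL configuration.

import tensorflow as tf
from tensorflow.keras import layers

def small_encoder(x, filters):
    # Two strided convolutions stand in for one encoder stream.
    x = layers.Conv2D(filters, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters * 2, 3, strides=2, padding="same", activation="relu")(x)
    return x

# One input per augmented view of the same CT slice (four streams).
views = [layers.Input(shape=(256, 256, 1), name=f"aug_view_{i}") for i in range(4)]
streams = [small_encoder(v, 32) for v in views]

fused = layers.Concatenate()(streams)  # CC: channel-wise feature fusion

# A small decoder maps the fused features back to a synthetic T1w slice.
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(fused)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
t1w = layers.Conv2D(1, 1, activation="linear")(x)

toy_model = tf.keras.Model(views, t1w)

In the FL and BD variants, the plain concatenation above would be replaced by a learned fusion layer or a controller block, respectively.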

Dependencies

tensorflow

matplotlib

SimpleITK

scipy

antspyx
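All of these packages are available from PyPI; assuming a standard Python environment, they can be installed with pip:

pip install tensorflow matplotlib SimpleITK scipy antspyx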


Available Models

| Model ID | File Name | Description |
|----------|-----------|-------------|
| BD | builder1_mode1l1abW512_1_11211z1p1rt_.h5 | Builder-based architecture model |
| CC | best_moderRl_RHID2_1mo.h5 | Encoder-concatenation-based configuration |
| FL | bestac22_mode3l_512m2_m21.h5 | Feature-level fusion-based model |
| NA | direct7_11ag23f11.h5 | Direct training baseline model |
| TA | best_modelaf2ndab7_221ag12g11.h5 | Traditional augmentation configuration model |

Model Architecture Overview

[Model architecture diagram]

Figure 1. Model architecture for the no-augmentation (NA) and traditional augmentation (TA) configurations.

[Multi-stream model architecture diagram]

Figure 2. Model architecture for the multi-stream approaches: builder controller block (BD), fusion layer (FL), and encoder concatenation (CC).

Download Model Files

You can download any of the .h5 model files listed above directly from this repository.
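If the weights are hosted on the Hugging Face Hub, one way to fetch a specific .h5 file programmatically is with the huggingface_hub package (pip install huggingface_hub). The repo_id below is a placeholder and must be replaced with this repository's actual identifier:

from huggingface_hub import hf_hub_download

# "username/pyMEAL" is a placeholder repo_id, not the real repository ID.
weights_path = hf_hub_download(
    repo_id="username/pyMEAL",
    filename="builder1_mode1l1abW512_1_11211z1p1rt_.h5",  # BD model from the table above
)
print("Model weights saved to:", weights_path)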


How to Use

Load a Model (Basic)

import tensorflow as tf

# Load the model
model = tf.keras.models.load_model("model.h5", compile=False)

# Run inference
output = model.predict(input_data)

Here, input_data refers to a CT image, and the corresponding T1-weighted (T1w) image is produced as the output.
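The sketch below shows one way to prepare input_data from a CT volume using SimpleITK (already listed as a dependency). The 512x512 slice size and the min-max intensity scaling are assumptions made for illustration, not the documented pyMEAL preprocessing; see the tutorials for the exact pipeline used in training. The model variable is the one loaded in the snippet above.

import numpy as np
import SimpleITK as sitk
import tensorflow as tf

ct = sitk.ReadImage("ct_volume.nii.gz")                 # path to your CT scan
volume = sitk.GetArrayFromImage(ct).astype("float32")   # array of shape (slices, H, W)

# Min-max scale intensities to [0, 1] (assumed normalization).
volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)

# Take the middle axial slice, resize to the assumed model input size,
# and add batch and channel dimensions: (1, 512, 512, 1).
slice_2d = volume[volume.shape[0] // 2]
slice_2d = tf.image.resize(slice_2d[..., None], (512, 512)).numpy()
input_data = slice_2d[None, ...]

output = model.predict(input_data)                      # synthetic T1w slice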

For detailed instructions on how to use each module of the pyMEAL software, please refer to the tutorial section on our GitHub repository.

How to Get Support

For help, please open an issue on the pyMEAL GitHub repository or contact the maintainers.
