## Data Usage Agreement
By accessing and using the dataset, you agree to the following terms and conditions:
1. **Purpose of Use:** This dataset is provided solely for research and educational purposes. Any commercial use is strictly prohibited without explicit written permission from the dataset creators.
2. **Ethical Use:** You agree to use this dataset in an ethical manner, respecting human dignity, privacy, and all applicable laws and regulations. The data must not be used to attempt to identify individuals or for any discriminatory or harmful purposes.
3. **Data Privacy:** This dataset may contain sensitive medical information. Although all personally identifiable information (PII) has been removed or anonymized to the best extent possible, you acknowledge your responsibility to ensure that the data remains de-identified and is not re-identified.
4. **Compliance with Regulations:** You agree to comply with all applicable data protection regulations, such as HIPAA, GDPR, or local equivalents.
5. **No Redistribution:** You shall not share, redistribute, or publish the dataset, in full or in part, without explicit consent from the dataset authors.
6. **Attribution:** Any published work or presentation using this dataset must cite the original source as specified in the dataset documentation.
7. **Indemnity:** You agree to hold harmless and indemnify the dataset providers from and against any claims arising from your use of the dataset.
8. **Revocation of Access:** The dataset creators reserve the right to revoke access to the dataset at any time, for any reason, including violations of this agreement.
# ODELIA Challenge Dataset
This dataset is part of the ODELIA project, a European Horizon initiative focused on developing privacy-preserving, AI-driven diagnostic tools using swarm learning.
The dataset provided here represents a curated subset of data from the broader ODELIA consortium. It is designed to facilitate the development, benchmarking, and validation of AI algorithms that can operate effectively across a range of heterogeneous clinical settings.
The dataset contains breast MR images along with corresponding lesion labels. For a comprehensive description of the dataset and its intended use, please refer to our paper: *Read the paper*.
## How to Use
### Prerequisites
Ensure you have the following dependencies installed:
```bash
pip install datasets torchio numpy pandas tqdm
```
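Because access to this dataset is gated, you also need to accept the conditions on the dataset page and authenticate with the Hugging Face Hub before `load_dataset` can fetch any files. A minimal sketch using token-based login (the token string below is a placeholder for your own access token):

```python
from huggingface_hub import login

# Authenticate this environment with the Hugging Face Hub so that gated files
# can be downloaded. Alternatively, run `huggingface-cli login` once in a terminal.
login(token="hf_...")  # placeholder: replace with your personal access token
```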
### Configurations
This dataset is available in two configurations:

| Name | Size | Image Size |
|---|---|---|
| default (original) | 96 GB | variable |
| unilateral | 39 GB | 256x256x32 |
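If you want to verify which configurations the repository exposes before downloading anything, the `datasets` library can list them. A minimal sketch, assuming you have already accepted the access conditions and authenticated as described above:

```python
from datasets import get_dataset_config_names

# List the configurations offered by the repository,
# which should include "default" and "unilateral".
print(get_dataset_config_names("ODELIA-AI/ODELIA-Challenge-2025"))
```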
### Option A: Use within the Hugging Face Framework
If you want to use the dataset directly within the Hugging Face `datasets` library, you can load it as follows:
```python
from datasets import load_dataset
import numpy as np

# Load the dataset
dataset = load_dataset("ODELIA-AI/ODELIA-Challenge-2025", name="default")

# Access the training split
ds_train = dataset['train']

# Retrieve a single sample from the training set
item = ds_train[0]

# Access an image
tensor = np.array(item['Image_T2'], dtype=np.int16)
affine = np.array(item['Affine_T2'], dtype=np.float64)

# Print metadata (excluding the image itself)
for key in item.keys():
    if not key.startswith('Image') and not key.startswith('Affine'):
        print(f"{key}: {item[key]}")
```
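Since `torchio` is already part of the prerequisites, the array and affine retrieved above can be wrapped in a `tio.ScalarImage` for preprocessing or export. A minimal sketch, assuming `Image_T2` is stored as a single 3D volume (torchio expects a leading channel axis, which the snippet adds):

```python
import numpy as np
import torchio as tio

# Wrap the sample loaded in Option A as a torchio image.
# The channel axis is added because torchio expects (C, W, H, D) tensors.
image = tio.ScalarImage(tensor=tensor[np.newaxis, ...], affine=affine)
print(image.spatial_shape, image.spacing)

# Optionally export the volume as NIfTI for inspection in a standard viewer.
image.save("sample_T2.nii.gz")
```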
### Option B: Downloading the Dataset
If you prefer to download the dataset to a local folder, use the script below. It creates the following folder structure:
```
CenterA/
├── data/
│   ├── Pre.nii.gz
│   ├── Post_1.nii.gz
│   └── ...
└── metadata/
    ├── annotation.csv
    └── split.csv
CenterB/
├── data/
└── metadata/
```
```python
from pathlib import Path

from datasets import load_dataset
import torchio as tio
import numpy as np
import pandas as pd
from tqdm import tqdm

# --------------------- Settings ---------------------
repo_id = "ODELIA-AI/ODELIA-Challenge-2025"
config = "unilateral"  # "default" or "unilateral"
output_root = Path("./dataset_downloaded")

# Load dataset in streaming mode
dataset = load_dataset(repo_id, name=config, streaming=True)

dir_config = {
    "default": {
        "data": "data",
        "metadata": "metadata",
    },
    "unilateral": {
        "data": "data_unilateral",
        "metadata": "metadata_unilateral",
    },
}

# Process dataset
metadata = []
for split, split_dataset in dataset.items():
    print("-------- Start Download - Split: ", split, " --------")
    for item in tqdm(split_dataset, desc="Downloading"):  # Stream data one-by-one
        uid = item["UID"]
        institution = item["Institution"]
        img_names = [name.split("Image_")[1] for name in item.keys() if name.startswith("Image")]

        # Create output folder
        path_folder = output_root / institution / dir_config[config]["data"] / uid
        path_folder.mkdir(parents=True, exist_ok=True)

        for img_name in img_names:
            img_data = item.pop(f"Image_{img_name}")
            img_affine = item.pop(f"Affine_{img_name}")

            # Skip if image data is None
            if img_data is None:
                continue

            # Extract image data and affine matrix
            img_data = np.array(img_data, dtype=np.int16)
            img_affine = np.array(img_affine, dtype=np.float64)
            img = tio.ScalarImage(tensor=img_data, affine=img_affine)

            # Save image
            img.save(path_folder / f"{img_name}.nii.gz")

        # Store metadata
        metadata.append(item)

# Convert metadata to DataFrame
df = pd.DataFrame(metadata)
for institution in df["Institution"].unique():
    # Load metadata
    df_inst = df[df["Institution"] == institution]

    # Save metadata to CSV files
    path_metadata = output_root / institution / dir_config[config]["metadata"]
    path_metadata.mkdir(parents=True, exist_ok=True)

    df_anno = df_inst.drop(columns=["Institution", "Split", "Fold"])
    df_anno.to_csv(path_metadata / "annotation.csv", index=False)

    df_split = df_inst[["UID", "Split", "Fold"]]
    df_split.to_csv(path_metadata / "split.csv", index=False)

print("Dataset streamed and saved successfully!")
```