---
license: apache-2.0
language:
- en
tags:
- datasets
- machine-learning
- deep-learning
- physics-modeling
- scientific-ML
- material-point-method
- MPM
- smooth-particle-hydrodynamics
- SPH
- Lagrangian-dynamics
pretty_name: MPM-Verse-Large
size_categories:
- 100K<n<1M
---
# MPM-Verse-MaterialSim-Large

## Dataset Summary
This dataset contains Material Point Method (MPM) simulations of various materials, including water, sand, plasticine, and jelly. Each material is represented as a point cloud that evolves over time. The dataset is designed for learning and predicting MPM-based physical simulations. Simulations are generated from five geometric models: Stanford Bunny, Spot, Dragon, Armadillo, and Blub. Each setting has 10 trajectories per object.
## Supported Tasks and Leaderboards
The dataset supports tasks such as:
- Physics-informed learning
- Point-cloud sequence prediction
- Fluid and granular material modeling
- Neural simulation acceleration
## Dataset Structure

### Materials and Metadata
Due to their longer duration, water and sand are split into multiple files for `rollout_full` and `train`. `rollout_full` contains the rollout trajectory over the full-order point cloud, while `rollout` uses a sample size of 2600. The first 40 trajectories are used in the train set, and the remaining 10 in the test set.
### Dataset Characteristics
| Material | # of Trajectories | Duration (steps) | Time Step (dt) | Shapes | Train Sample Size |
|---|---|---|---|---|---|
| Water3DNCLAW | 50 | 1000 | 5e-3 | Blub, Spot, Bunny, Armadillo, Dragon | 2600 |
| Sand3DNCLAW | 50 | 500 | 2.5e-3 | Blub, Spot, Bunny, Armadillo, Dragon | 2600 |
| Plasticine3DNCLAW | 50 | 200 | 2.5e-3 | Blub, Spot, Bunny, Armadillo, Dragon | 2600 |
| Jelly3DNCLAW | 50 | 334 | 7.5e-3 | Blub, Spot, Bunny, Armadillo, Dragon | 2600 |
| Contact3DNCLAW | 50 | 600 | 2.5e-3 | Blub, Spot, Bunny | 2600 |
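For programmatic access, the table above can be encoded as a plain dictionary. The dictionary below is illustrative only (the values are copied from the table, but the dataset itself ships no such file):

```python
# Dataset characteristics from the table above, as a dict.
# "duration" is the number of simulation steps per trajectory.
materials = {
    "Water3DNCLAW":      {"n_traj": 50, "duration": 1000, "dt": 5e-3},
    "Sand3DNCLAW":       {"n_traj": 50, "duration": 500,  "dt": 2.5e-3},
    "Plasticine3DNCLAW": {"n_traj": 50, "duration": 200,  "dt": 2.5e-3},
    "Jelly3DNCLAW":      {"n_traj": 50, "duration": 334,  "dt": 7.5e-3},
    "Contact3DNCLAW":    {"n_traj": 50, "duration": 600,  "dt": 2.5e-3},
}

for name, m in materials.items():
    total_steps = m["n_traj"] * m["duration"]     # frames across all trajectories
    sim_time = m["duration"] * m["dt"]            # simulated seconds per trajectory
    print(f"{name}: {total_steps} steps total, {sim_time:.3f}s per trajectory")
```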
### Dataset Files
Each dataset file is a dictionary with the following keys:

#### train.obj / test.pt

- `particle_type` (list): Indicator for material (only relevant for multi-material simulations). Each element has shape `[N]`, corresponding to the number of particles in the point cloud.
- `position` (list): Snippet of past states; each element has shape `[N, W, D]`, where:
  - `N`: sample size
  - `W`: time window (6)
  - `D`: dimension (2D or 3D)
- `n_particles_per_example` (list): Integer `[1,]` indicating the size of the sample `N`.
- `output` (list): Ground truth for the predicted state, shape `[N, D]`.
#### rollout.pt / rollout_full.pt

- `position` (list): A list of all trajectories; each element is a complete trajectory with shape `[N, T, D]`, where:
  - `N`: number of particles
  - `T`: rollout duration
  - `D`: dimension (2D or 3D)
### Metadata Files

Each dataset folder contains a `metadata.json` file with the following information:

- `bounds` (list): Boundary conditions.
- `default_connectivity_radius` (float): Radius used within the graph neural network.
- `vel_mean` (list): Mean velocity of the entire dataset `[x, y, (z)]`, for noise profiling.
- `vel_std` (list): Standard deviation of velocity `[x, y, (z)]`, for noise profiling.
- `acc_mean` (list): Mean acceleration `[x, y, (z)]`, for noise profiling.
- `acc_std` (list): Standard deviation of acceleration `[x, y, (z)]`, for noise profiling.
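As one example of using these statistics, here is a sketch of standardizing velocities with `vel_mean` / `vel_std`. The exact noise-profiling scheme used in training is an assumption, and the metadata values below are made up for illustration; real values come from each material's `metadata.json`:

```python
import numpy as np

def standardize_velocity(vel, meta):
    """Standardize per-axis velocities using dataset statistics from metadata."""
    mean = np.asarray(meta["vel_mean"])
    std = np.asarray(meta["vel_std"])
    return (vel - mean) / std

# Illustrative statistics; load the real ones with json.load(open(".../metadata.json")).
meta = {"vel_mean": [0.0, -0.01, 0.0], "vel_std": [0.02, 0.03, 0.02]}
vel = np.array([[0.02, -0.04, 0.0]])      # [N, 3] particle velocities
print(standardize_velocity(vel, meta))    # [[ 1. -1.  0.]]
```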
## Downloading the Dataset

```python
from huggingface_hub import hf_hub_download

# params.dataset_rootdir holds this dataset's Hub repository ID.
files = ['train.obj', 'test.pt', 'rollout.pt', 'metadata.json', 'rollout_full.pt']

# Hub paths always use forward slashes, so build them explicitly rather than
# with os.path.join (which would produce backslashes on Windows).
train_path = hf_hub_download(repo_id=params.dataset_rootdir, repo_type='dataset', filename=f'Jelly3DNCLAW/{files[0]}', cache_dir="./dataset_mpmverse")
test_path = hf_hub_download(repo_id=params.dataset_rootdir, repo_type='dataset', filename=f'Jelly3DNCLAW/{files[1]}', cache_dir="./dataset_mpmverse")
rollout_path = hf_hub_download(repo_id=params.dataset_rootdir, repo_type='dataset', filename=f'Jelly3DNCLAW/{files[2]}', cache_dir="./dataset_mpmverse")
metadata_path = hf_hub_download(repo_id=params.dataset_rootdir, repo_type='dataset', filename=f'Jelly3DNCLAW/{files[3]}', cache_dir="./dataset_mpmverse")
rollout_full_path = hf_hub_download(repo_id=params.dataset_rootdir, repo_type='dataset', filename=f'Jelly3DNCLAW/{files[4]}', cache_dir="./dataset_mpmverse")
```
### Processing Train

```python
import torch  # needed to unpickle tensors saved from PyTorch
import pickle

with open("path/to/train.obj", "rb") as f:
    data = pickle.load(f)

positions = data["position"][0]   # first training sample
print(positions.shape)            # Example output: (N, W, D)
```
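The `[N, W, D]` position window can be converted into model inputs with finite differences. This is a minimal sketch, assuming per-step differences (not scaled by `dt`) as is common in GNS-style pipelines; the paper's actual featurization may differ:

```python
import numpy as np

def window_kinematics(positions):
    """positions: [N, W, D] window of past states -> (velocity, acceleration)."""
    velocity = positions[:, 1:] - positions[:, :-1]    # [N, W-1, D]
    acceleration = velocity[:, 1:] - velocity[:, :-1]  # [N, W-2, D]
    return velocity, acceleration

pos = np.random.rand(2600, 6, 3)   # e.g. one training sample with W = 6
vel, acc = window_kinematics(pos)
print(vel.shape, acc.shape)        # (2600, 5, 3) (2600, 4, 3)
```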
### Processing Rollout

```python
import torch  # needed to unpickle tensors saved from PyTorch
import pickle

with open("path/to/rollout_full.pt", "rb") as f:
    data = pickle.load(f)

positions = data["position"]
print(len(positions))        # number of trajectories, e.g. 50
print(positions[0].shape)    # one full trajectory: (N, T, 3)
```
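To relate a full-order rollout to the 2600-particle training setting, the point cloud can be subsampled. A uniform random subsample is shown here as a sketch only; the sampling scheme actually used to build the 2600-particle sets is not specified on this card and may differ:

```python
import numpy as np

def subsample_trajectory(traj, n_sample=2600, seed=0):
    """traj: [N, T, D] full-order trajectory -> [n_sample, T, D] subsample."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(traj.shape[0], size=n_sample, replace=False)
    return traj[idx]

full = np.zeros((10_000, 50, 3))   # illustrative full-order trajectory
reduced = subsample_trajectory(full)
print(reduced.shape)               # (2600, 50, 3)
```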
## Citation

If you use this dataset, please cite:

```bibtex
@article{viswanath2024reduced,
  title={Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs},
  author={Viswanath, Hrishikesh and Chang, Yue and Berner, Julius and Chen, Peter Yichen and Bera, Aniket},
  journal={arXiv preprint arXiv:2407.03925},
  year={2024}
}
```
## Source

The 3D datasets (e.g., Water3D, Sand3D, Plasticine3D, Jelly3D, RigidCollision3D, Melting3D) were generated using the NCLAW simulator, developed by Ma et al. (ICML 2023).