# MACE

## Reference

Ilyes Batatia, Dávid Péter Kovács, Gregor N. C. Simm, Christoph Ortner, and Gábor Csányi. MACE: Higher Order Equivariant Message Passing Neural Networks for Fast and Accurate Force Fields, 2023.

URL: https://arxiv.org/abs/2206.07697

## How to Use

For complete usage instructions and more information, please refer to our documentation.
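
As a quick, hedged sketch (the supported loading and inference workflow is described in the documentation), the model files can be downloaded from the Hugging Face Hub with the `huggingface_hub` package:

```python
from huggingface_hub import snapshot_download

# Download the model files for this repository from the Hugging Face Hub.
# Loading the model and running inference should follow the MLIP library
# documentation; this sketch only fetches the files locally.
model_dir = snapshot_download(repo_id="InstaDeepAI/mace-organics")
print(f"Model files downloaded to: {model_dir}")
```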

## Model architecture

| Parameter | Value | Description |
| --- | --- | --- |
| `num_layers` | 2 | Number of MACE layers. |
| `num_channels` | 128 | Number of channels. |
| `l_max` | 3 | Highest degree of the spherical harmonics. |
| `node_symmetry` | 3 | Highest degree of node features kept after the node-wise power expansion of features. |
| `correlation` | 2 | Maximum correlation order. |
| `readout_irreps` | `["16x0e", "0e"]` | Irreps for the readout block. |
| `num_readout_heads` | 1 | Number of readout heads. |
| `include_pseudotensors` | `False` | Whether to include pseudo-tensors. |
| `num_bessel` | 8 | Number of Bessel basis functions. |
| `activation` | `silu` | Activation function used in the non-linear readout block. |
| `radial_envelope` | `polynomial_envelope` | Radial envelope function. |
| `symmetric_tensor_product_basis` | `False` | Whether to use a symmetric tensor product basis. |
| `atomic_energies` | `average` | Treatment of the atomic energies. |
| `avg_num_neighbors` | `None` | Mean number of neighbors. |
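
For illustration only, these settings can be collected into a configuration mapping. This is a hypothetical sketch whose keys mirror the table above; the actual configuration schema is defined by the MLIP library:

```python
# Hypothetical MACE architecture configuration mirroring the table above.
# Key names follow the table and are not guaranteed to match the library schema.
mace_architecture = {
    "num_layers": 2,
    "num_channels": 128,
    "l_max": 3,
    "node_symmetry": 3,
    "correlation": 2,
    "readout_irreps": ["16x0e", "0e"],
    "num_readout_heads": 1,
    "include_pseudotensors": False,
    "num_bessel": 8,
    "activation": "silu",
    "radial_envelope": "polynomial_envelope",
    "symmetric_tensor_product_basis": False,
    "atomic_energies": "average",
    "avg_num_neighbors": None,
}
```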

For more information about MACE hyperparameters, please refer to our documentation.

## Training

Training is performed over 220 epochs, with an exponential moving average (EMA) decay rate of 0.99. The model employs an MSE loss function with scheduled weights for the energy and force components: initially, the energy term is weighted at 40 and the force term at 1000, and at epoch 115 these weights are swapped (a sketch of this schedule is given at the end of this section). We use our default MLIP optimizer in v1.0.0 with the following settings:

| Parameter | Value | Description |
| --- | --- | --- |
| `init_learning_rate` | 0.01 | Initial learning rate. |
| `peak_learning_rate` | 0.01 | Peak learning rate. |
| `final_learning_rate` | 0.01 | Final learning rate. |
| `weight_decay` | 0 | Weight decay. |
| `warmup_steps` | 4000 | Number of optimizer warm-up steps. |
| `transition_steps` | 360000 | Number of optimizer transition steps. |
| `grad_norm` | 500 | Gradient norm threshold used for gradient clipping. |
| `num_gradient_accumulation_steps` | 1 | Number of steps to accumulate before taking an optimizer step. |
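
Because the initial, peak, and final learning rates are all 0.01, the schedule is effectively flat. As a hedged illustration, an `optax` optimizer with equivalent settings could be assembled as follows (AdamW and the warmup-decay schedule family are assumptions here; the actual MLIP default optimizer may differ):

```python
import optax

# Warm-up followed by exponential decay; with init == peak == final == 0.01
# and decay_rate 1.0, the learning rate is constant after warm-up.
schedule = optax.warmup_exponential_decay_schedule(
    init_value=0.01,           # init_learning_rate
    peak_value=0.01,           # peak_learning_rate
    warmup_steps=4_000,        # warmup_steps
    transition_steps=360_000,  # transition_steps
    decay_rate=1.0,            # no effective decay since peak == final
    end_value=0.01,            # final_learning_rate
)

optimizer = optax.chain(
    optax.clip_by_global_norm(500.0),  # grad_norm clipping
    optax.adamw(learning_rate=schedule, weight_decay=0.0),
)
# num_gradient_accumulation_steps = 1 means no accumulation; for k > 1 the
# optimizer could be wrapped in optax.MultiSteps(optimizer, every_k_schedule=k).
```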

For more information about the optimizer, please refer to our documentation.
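
The scheduled energy/force loss weights described above amount to a simple step schedule. A minimal sketch (not the library's loss implementation):

```python
def energy_force_weights(epoch: int) -> tuple[float, float]:
    """Return (energy_weight, force_weight) for the scheduled MSE loss.

    The energy term is weighted at 40 and the force term at 1000 until
    epoch 115, at which point the two weights are swapped.
    """
    if epoch < 115:
        return 40.0, 1000.0
    return 1000.0, 40.0

# Usage, assuming per-component MSE terms computed elsewhere:
# w_energy, w_force = energy_force_weights(epoch)
# loss = w_energy * mse_energy + w_force * mse_forces
```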

## Dataset

| Parameter | Value | Description |
| --- | --- | --- |
| `graph_cutoff_angstrom` | 5 | Graph cutoff distance (in Å). |
| `max_n_node` | 32 | Maximum number of nodes allowed in a batch. |
| `max_n_edge` | 288 | Maximum number of edges allowed in a batch. |
| `batch_size` | 64 | Number of graphs in a batch. |
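
To keep batch shapes static for JAX compilation, node and edge caps like these are typically enforced by padding each batch to fixed sizes. A hedged sketch using `jraph` (whether the caps are per-graph budgets scaled by the batch size, as assumed below, depends on the library's actual batching logic):

```python
import jraph

# Per-graph budgets from the table, scaled by the batch size (an assumption).
BATCH_SIZE, MAX_N_NODE, MAX_N_EDGE = 64, 32, 288

def pad_batch(batch: jraph.GraphsTuple) -> jraph.GraphsTuple:
    """Pad a batch of graphs to fixed node/edge/graph counts."""
    return jraph.pad_with_graphs(
        batch,
        n_node=BATCH_SIZE * MAX_N_NODE,
        n_edge=BATCH_SIZE * MAX_N_EDGE,
        n_graph=BATCH_SIZE + 1,  # one extra dummy graph absorbs the padding
    )
```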

This model was trained on the SPICE2_curated dataset. For more information about the dataset configuration, please refer to our documentation.

## License summary

  1. The Licensed Models are only available under this License for Non-Commercial Purposes.
  2. You are permitted to reproduce, publish, share and adapt the Output generated by the Licensed Model only for Non-Commercial Purposes and in accordance with this License.
  3. You may not use the Licensed Models or any of their Outputs:
    1. in connection with any Commercial Purposes, unless agreed by Us under a separate licence;
    2. to train, improve or otherwise influence the functionality or performance of any other third-party derivative model that is commercial or intended for a Commercial Purpose and is similar to the Licensed Models;
    3. to create models distilled or derived from the Outputs of the Licensed Models, unless such models are for Non-Commercial Purposes and open-sourced under the same licence as the Licensed Models; or
    4. in violation of any applicable laws and regulations.