
# ViSNet

## Reference

Yusong Wang, Tong Wang, Shaoning Li, Xinheng He, Mingyu Li, Zun Wang, Nanning Zheng, Bin Shao, and Tie-Yan Liu. Enhancing geometric representations for molecules with equivariant vector-scalar interactive message passing. Nature Communications, 15(1), January 2024. ISSN: 2041-1723. URL: https://dx.doi.org/10.1038/s41467-023-43720-2.

## How to Use

For complete usage instructions and more information, please refer to our documentation.
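As a minimal starting point, the model files can be fetched from the Hugging Face Hub with the standard `huggingface_hub` client; loading and running the model with the MLIP library is covered in the documentation.

```python
from huggingface_hub import snapshot_download

# Download the model files from this repository to a local directory.
local_dir = snapshot_download(repo_id="InstaDeepAI/visnet-organics")
print(local_dir)
```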

## Model architecture

| Parameter | Value | Description |
| --- | --- | --- |
| num_layers | 4 | Number of ViSNet layers. |
| num_channels | 128 | Number of channels. |
| l_max | 2 | Highest order included in the spherical harmonics expansion. |
| num_heads | 8 | Number of heads in the attention block. |
| num_rbf | 32 | Number of radial basis functions in the embedding block. |
| trainable_rbf | False | Whether to add learnable weights to the radial embedding basis functions. |
| activation | silu | Activation function for the output block. |
| attn_activation | silu | Activation function for the attention block. |
| vecnorm_type | None | Type of the vector norm. |
| atomic_energies | average | Treatment of the atomic energies. |
| avg_num_neighbors | None | Mean number of neighbors. |

For more information about ViSNet hyperparameters, please refer to our documentation.
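For illustration, the hyperparameters above can be collected into a plain config object. The dataclass below simply mirrors the table; it is an assumption for readability, not the actual MLIP config class.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative container mirroring the hyperparameter table above; the
# field names follow the documentation, but this class is hypothetical.
@dataclass
class ViSNetConfig:
    num_layers: int = 4
    num_channels: int = 128
    l_max: int = 2
    num_heads: int = 8
    num_rbf: int = 32
    trainable_rbf: bool = False
    activation: str = "silu"
    attn_activation: str = "silu"
    vecnorm_type: Optional[str] = None
    atomic_energies: str = "average"
    avg_num_neighbors: Optional[float] = None

config = ViSNetConfig()
```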

## Training

Training is performed over 220 epochs, with an exponential moving average (EMA) decay rate of 0.99. The model employs a Huber loss with scheduled weights for the energy and force components: initially the energy term is weighted at 40 and the force term at 1000, and at epoch 115 these weights are swapped.
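A minimal sketch of this scheduled Huber loss in optax; the function and argument names are placeholders, not the actual MLIP training code:

```python
import jax.numpy as jnp
import optax

def loss_weights(epoch: int) -> tuple[float, float]:
    # (energy_weight, force_weight): (40, 1000) before epoch 115, then flipped.
    return (40.0, 1000.0) if epoch < 115 else (1000.0, 40.0)

def total_loss(energy_pred, energy_ref, forces_pred, forces_ref, epoch):
    w_energy, w_force = loss_weights(epoch)
    energy_term = jnp.mean(optax.huber_loss(energy_pred, energy_ref))
    force_term = jnp.mean(optax.huber_loss(forces_pred, forces_ref))
    return w_energy * energy_term + w_force * force_term
```

We use our default MLIP optimizer in v1.0.0 with the following settings: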

| Parameter | Value | Description |
| --- | --- | --- |
| init_learning_rate | 0.0001 | Initial learning rate. |
| peak_learning_rate | 0.0001 | Peak learning rate. |
| final_learning_rate | 0.0001 | Final learning rate. |
| weight_decay | 0 | Weight decay. |
| warmup_steps | 4000 | Number of optimizer warm-up steps. |
| transition_steps | 360000 | Number of optimizer transition steps. |
| grad_norm | 500 | Gradient norm used for gradient clipping. |
| num_gradient_accumulation_steps | 1 | Steps to accumulate before taking an optimizer step. |

For more information about the optimizer, please refer to our documentation.
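With init, peak, and final learning rates all equal, the schedule is effectively constant at 1e-4. Below is a sketch of an equivalent optax chain, assuming the optimizer resembles AdamW; the actual MLIP implementation may differ:

```python
import optax

# Effectively constant at 1e-4 since init == peak == final.
schedule = optax.warmup_exponential_decay_schedule(
    init_value=1e-4,        # init_learning_rate
    peak_value=1e-4,        # peak_learning_rate
    warmup_steps=4_000,
    transition_steps=360_000,
    decay_rate=1.0,         # no decay, so the rate stays at 1e-4
    end_value=1e-4,         # final_learning_rate
)
optimizer = optax.chain(
    optax.clip_by_global_norm(500.0),  # grad_norm
    optax.adamw(learning_rate=schedule, weight_decay=0.0),
)
# num_gradient_accumulation_steps = 1: an update is applied every step.
optimizer = optax.MultiSteps(optimizer, every_k_schedule=1)
# The parameter EMA (decay 0.99) would be tracked separately during training.
```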

## Dataset

| Parameter | Value | Description |
| --- | --- | --- |
| graph_cutoff_angstrom | 5 | Graph cutoff distance (in Å). |
| max_n_node | 32 | Maximum number of nodes allowed in a batch. |
| max_n_edge | 288 | Maximum number of edges allowed in a batch. |
| batch_size | 16 | Number of graphs in a batch. |

This model was trained on the SPICE2_curated dataset. For more information about dataset configuration, please refer to our documentation.
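A sketch of padded batching under these limits, assuming a jraph-style pipeline; the dummy graph below stands in for real molecular graphs built with the 5 Å cutoff:

```python
import jraph
import numpy as np

# A single dummy molecular graph standing in for the real dataset.
dummy = jraph.GraphsTuple(
    nodes=np.zeros((3, 1)), edges=np.zeros((2, 1)),
    senders=np.array([0, 1]), receivers=np.array([1, 2]),
    n_node=np.array([3]), n_edge=np.array([2]), globals=None,
)

# Yields padded batches whose node/edge/graph totals never exceed the
# limits in the dataset table above.
for batch in jraph.dynamically_batch(
    iter([dummy]),
    n_node=32,    # max_n_node
    n_edge=288,   # max_n_edge
    n_graph=16,   # batch_size
):
    print(batch.n_node)  # one padded GraphsTuple per training step
```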

## License summary

1. The Licensed Models are only available under this License for Non-Commercial Purposes.
2. You are permitted to reproduce, publish, share and adapt the Output generated by the Licensed Model only for Non-Commercial Purposes and in accordance with this License.
3. You may not use the Licensed Models or any of their Outputs:
   1. in connection with any Commercial Purposes, unless agreed by Us under a separate licence;
   2. to train, improve or otherwise influence the functionality or performance of any other third-party derivative model that is commercial or intended for a Commercial Purpose and is similar to the Licensed Models;
   3. to create models distilled or derived from the Outputs of the Licensed Models, unless such models are for Non-Commercial Purposes and open-sourced under the same license as the Licensed Models; or
   4. in violation of any applicable laws and regulations.