arxiv/05070982_6195_461c_a159_ac118928bee6.md
Spatiotemporal Weather Data Predictions with Shortcut Recurrent-Convolutional Networks: A Solution for the Weather4cast challenge Jussi Leinonen 1Federal Office of Meteorology and Climatology MeteoSwis, Locarno-Monti, Switzerland ###### weather, satellite data, neural networks, gated recurrent units + Footnote †: 2 were provided for regions R1-R3, constituting the \"Core\" competition. Meanwhile, R4-R6 only had test data available, meaning that they had to be evaluated using models trained on R1-R3; this was called the \"Transfer Learning\" competition. Furthermore, all regions had a set of \"held-out\" data which were made available only during the final week of the competition; the final results were based on the performance with these data. The performance of the models was evaluated using the mean-square error (MSE) for each variable. However, some adjustments were made to the MSE to account for the particular needs of each variable, except for _crr_intensity_. First, the loss for _temperature_ was modified to account for varying amounts of missing data in each region. Second, _asii_turb_trop_prob_ is a probabilistic variable and the output of the model was passed through a truncated and normalized logit transform before the evaluation of the MSE. Third, although _cma_ is technically evaluated using the MSE, the variable in the output data file is required to be quantized such that the value is either \\(0\\) or \\(1\\); therefore, model output values \\(<0.5\\) are rounded to \\(0\\) and outputs \\(\\geq 0.5\\) are rounded to \\(1\\) before evaluation. The details of the metrics can be found in [6]. ## 3 Solution ### Models The model presented here is a neural network combining recurrent-convolutional layers and shortcut connections in an encoder-forecaster architecture. The architecture is presented in Fig. 1. It is based on that developed in [7] for precipitation nowcasting and adopted by [8], as well as similar to that of [9], with some differences that are described below. The encoder section consists of four recurrent downsampling stages. Each stage first passes the sequence through a residual block [10], with each frame processed using the same convolutional filters. A strided convolution in the residual block is used to downsample the input by a factor of \\(2\\). Then, the sequence is processed by a gated recurrent unit (GRU) layer [11]; a tensor of zeros is passed as the initial state of the GRU. The number of channels in the convolutions is increased with increasing depth in the encoder. The forecaster section is approximately a mirror image of the encoder section. Each stage consists of a GRU layer which is followed by bilinear upsampling and a residual block. A shortcut similar to U-Net [12] is utilized: The final state of each GRU in the encoder is passed through a convolution and then used as the initial state of the GRU of corresponding depth in the forecaster. This allows the high-resolution features of the recent frames to be passed through, preventing the first predictions from being blurry. A final projection and a sigmoid activation produce the output as a single variable constrained between \\(0\\) and \\(1\\). The main difference of the architecture presented here to that of [7] is that the use of Trajectory GRU (TrajGRU) is rejected as TrajGRU was found to cause training instability. Two variants are considered instead. The first utilizes the Convolutional GRU (ConvGRU) layer adopted by e.g. [9, 13, 14]. 
In the second variant, the convolution in the ConvGRU is replaced by a residual block modified for this purpose. The use of the residual block increases the depth of the operations in the GRU and is thus expected to allow it to better process nonlinear transformations, and also to increase the distance at which pixels can influence each other at each step of the ConvGRU. The latter effect may recover some of the advantages of TrajGRU over ConvGRU that [7] found. The author is unaware of previously published instances of a residual layer being used in place of the convolution in a GRU. In this paper, this variant is called "ResGRU", although the same abbreviation was used for a different combination of GRUs and residual connections in [15].

Figure 1: Illustration of the network architecture.

The models were implemented using TensorFlow/Keras [16] version 2.4. The source code and the pre-trained models can be found through the links in Appendix A.

### Training

Since the scores for the target variables were evaluated independently of each other, a separate instance of the model was trained for each target variable, using all variables as inputs for each model. The models were trained on the training dataset of R1-R3 such that every available gapless sequence of \(36\) frames was used for training, resulting in \(72192\) different sequences (albeit with considerable overlap). The training was performed with combined data from all regions R1-R3 in order to increase the training dataset size and improve the ability of the model to generalize; specializing the model to single regions was not attempted. The static data (latitude, longitude and elevation) were also used for training. Data augmentation by random rotation in \(90°\) increments, as well as random top-down and left-right mirroring, was used to further increase the effective number of training samples.

The model for _asii_turb_trop_prob_ was trained using a custom logit loss corresponding to the metric specified in [6], while the other variables were trained using the standard MSE loss. The Adam optimizer [17] was used to train the models with a batch size of \(32\). The progress of the training was evaluated using the provided validation dataset for R1-R3. After each training epoch, the evaluation metric was computed on the validation set and then:

1. If the metric improved upon the best evaluation result, the model weights were saved.
2. If the metric had not improved in \(3\) epochs, the learning rate was reduced by a factor of \(5\).
3. If the metric had not improved in \(10\) epochs, the training was stopped early.

In practice, condition 3 was never activated, as the model continued to achieve marginal gains on the validation data at least every few epochs until the maximum training time of \(12\) h or \(24\) h (depending on the training run) was reached. This suggests that the model did not suffer significantly from overfitting, which typically causes the validation loss to start increasing even as the training loss keeps decreasing. This is perhaps due to the relatively modest number of weights in the models by the standards of modern ConvNets: approximately \(12.1\) million weights in the ConvGRU variant and \(18.6\) million in the ResGRU variant. The loss over the validation set was used as the metric for each variable except _cma_, for which a rounded MSE that takes the \(0,1\) quantization into account was used. This checkpointing and scheduling logic maps directly onto standard Keras callbacks, as sketched below.
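The epoch-end logic above (save on improvement, reduce the learning rate by a factor of 5 after 3 stagnant epochs, stop after 10) corresponds closely to stock Keras callbacks. The sketch below, including a rounded-MSE validation metric for _cma_, is a plausible reconstruction rather than the released challenge code; names such as `rounded_mse` are illustrative.

```python
import tensorflow as tf

def rounded_mse(y_true, y_pred):
    """Validation metric for cma: quantize predictions at 0.5 before the MSE,
    mirroring the 0/1 rounding applied to the submitted outputs."""
    y_quant = tf.cast(y_pred >= 0.5, y_pred.dtype)
    return tf.reduce_mean(tf.square(y_true - y_quant))

callbacks = [
    # 1. Save the weights whenever the validation metric improves.
    tf.keras.callbacks.ModelCheckpoint(
        "best_weights.h5", monitor="val_rounded_mse",
        save_best_only=True, save_weights_only=True),
    # 2. Reduce the learning rate by a factor of 5 after 3 epochs without improvement.
    tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_rounded_mse", factor=0.2, patience=3),
    # 3. Stop early after 10 epochs without improvement.
    tf.keras.callbacks.EarlyStopping(
        monitor="val_rounded_mse", patience=10),
]

# model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse",
#               metrics=[rounded_mse])
# model.fit(train_data, validation_data=val_data, callbacks=callbacks)
```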
A parallel setup of eight Nvidia Tesla V100 GPUs was used to train the models. Training for one epoch took approximately 20 minutes with this hardware. The eight parallel GPUs only provided a speedup factor of approximately \(3\) compared to training on a single GPU, suggesting that single-GPU training of the models should be feasible, although the batch size would likely have to be reduced, as the models require rather large amounts of GPU memory.

## 4 Results

Both the ConvGRU and ResGRU variants of the model were trained for each target variable. The evaluation results for the validation dataset are shown in Table 1. Comparisons to TrajGRU were found impractical, as models using TrajGRU would not converge properly due to the training instability mentioned in Sect. 3.1.

Table 1: Evaluation metrics for the validation dataset (best result for each variable in bold).

|         | _temperature_ | _crr_intensity_ | _asii_turb_trop_prob_ | _cma_ |
|---------|---------------|-----------------|------------------------|-------|
| ConvGRU | 0.004564      | **0.0001259**   | 0.002250               | 0.1393 |
| ResGRU  | **0.004356**  | 0.0001278       | **0.002161**           | **0.1376** |

Based on the evaluation results, three submissions were made to the final leaderboards of Weather4cast Stage 1: one using the ConvGRU variant for all variables (codenamed V4c), another using ResGRU (V4rc), and a third using the best model for each variable based on the validation metrics (V4pc). It was indeed this last combination that produced the best results also on the leaderboards, for both the Core and Transfer Learning competitions, as shown in Table 2.

Table 2: Evaluation metrics for the held-out test dataset, as computed by the Weather4cast website (https://www.iarai.ac.at/weather4cast/).

|                  | Core       | Transfer learning |
|------------------|------------|-------------------|
| ConvGRU          | 0.5051     | 0.4658            |
| ResGRU           | 0.5014     | 0.4626            |
| Best combination | **0.4987** | **0.4607**        |

Figures 2-5 show examples of the predictions using the validation dataset. These are all shown for the same scene except for Fig. 3, where a different scene was chosen because the one used for the others did not contain precipitation. It is clear that the predictions start relatively sharp and get blurrier over time, reflecting the increasing uncertainty. The blurriness is likely exacerbated by the use of the MSE metric, specified in the data challenge, which is prone to regression to the mean. Especially in Fig. 4, one can also see that the model can predict the motion of features in the images.

Figure 2: An example of predictions for the _temperature_ variable. The frames on the left correspond to past temperature, while the frames on the right show the real future temperature (top row) and the predicted temperature (bottom row). The \(T\) coordinate refers to the index of the frame in the sequence, with \(T=0\) representing the last input data point and \(T=1\) the first prediction. The model output normalized to the range \((0,1)\) is shown.

Figure 3: As Fig. 2, but for _crr_intensity_. A different case is shown, as the case of Fig. 2 does not contain precipitation.

Figure 4: As Fig. 2, but for _asii_turb_trop_prob_.

Figure 5: As Fig. 2, but for _cma_. The white contours in the predictions indicate \(0.5\), the threshold of the cloud mask in the output.

## 5 Conclusions

The model presented here reached the top of the final leaderboards in both the Core and the Transfer Learning categories of the Weather4cast 2021 Challenge Stage 1. It is a versatile solution to the problem of predicting the evolution of atmospheric fields, producing sharp predictions for the near term and increasing the uncertainty for longer lead times. The architecture can be easily adapted to other tasks such as probabilistic predictions or outputs that are different from the inputs.
Further research is needed to handle, for instance, different spatial and temporal resolutions of inputs and data available for future time steps.

## Acknowledgments

This project benefited from parallel development in the fellowship "Seamless Artificially Intelligent Thunderstorm Nowcasts" from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). The hosting institution of this fellowship is MeteoSwiss in Switzerland. The author thanks U. Hamann and A. Rigazzi for discussions regarding the model and training.

## References

* [1] P. Bauer, A. Thorpe, G. Brunet, The quiet revolution of numerical weather prediction, Nature 525 (2015) 47-55. doi:10.1038/nature14956.
* [2] A. McGovern, K. L. Elmore, D. J. Gagne, II, S. E. Haupt, C. D. Karstens, R. Lagerquist, T. Smith, J. K. Williams, Using artificial intelligence to improve real-time decision-making for high-impact weather, Bull. Amer. Meteor. Soc. 98 (2017) 2073-2090. doi:10.1175/BAMS-D-16-0123.1.
* [3] M. Reichstein, G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, Prabhat, Deep learning and process understanding for data-driven Earth system science, Nature 566 (2019) 195-204. doi:10.1038/s41586-019-0912-1.
* [4] C. Huntingford, E. S. Jeffers, M. B. Bonsall, H. M. Christensen, T. Lees, H. Yang, Machine learning and artificial intelligence to aid climate change research and preparedness, Environmental Research Letters 14 (2019) 124007. doi:10.1088/1748-9326/ab4e55.
* [5] S. E. Haupt, W. Chapman, S. V. Adams, C. Kirkwood, J. S. Hosking, N. H. Robinson, S. Lerch, A. C. Subramanian, Towards implementing artificial intelligence post-processing in weather and climate: proposed actions from the Oxford 2019 workshop, Philos. Trans. R. Soc. London, Ser. A 379 (2021) 20200091. doi:10.1098/rsta.2020.0091.
* [6] IARAI, Weather4cast 2021: Competition metrics, 2021. URL: https://www.iarai.ac.at/weather4cast/wp-content/uploads/sites/3/2021/04/w4c.pdf.
* [7] X. Shi, Z. Gao, L. Lausen, H. Wang, D.-Y. Yeung, W.-k. Wong, W.-c. Woo, Deep learning for precipitation nowcasting: A benchmark and a new model, in: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (Eds.), Advances in Neural Information Processing Systems, volume 30, Curran Associates, Inc., 2017. URL: https://proceedings.neurips.cc/paper/2017/file/a6db4ed04f1621a119799fd3d7545d3d-Paper.pdf.
* [8] G. Franch, D. Nerini, M. Pendesini, L. Coviello, G. Jurman, C. Furlanello, Precipitation nowcasting with orographic enhanced stacked generalization: Improving deep learning predictions on extreme events, Atmosphere 11 (2020). doi:10.3390/atmos11030267.
* [9] S. Ravuri, K. Lenc, M. Willson, D. Kangin, R. Lam, P. Mirowski, M. Fitzsimons, M. Athanassiadou, S. Kashem, S. Madge, R. Prudden, A. Mandhane, A. Clark, A. Brock, K. Simonyan, R. Hadsell, N. Robinson, E. Clancy, A. Arribas, S. Mohamed, Skillful precipitation nowcasting using deep generative models of radar, 2021. arXiv:2104.00954.
* [10] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. doi:10.1109/CVPR.2016.90.
* [11] K. Cho, B. van Merrienboer, D. Bahdanau, Y. Bengio, On the properties of neural machine translation: Encoder-decoder approaches, in: Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, 2014, pp. 103-111.
* [12] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, 2015, pp. 234-241. doi:10.1007/978-3-319-24574-4_28.
* [13] L. Tian, X. Li, Y. Ye, P. Xie, Y. Li, A generative adversarial gated recurrent unit model for precipitation nowcasting, IEEE Geosci. Remote Sens. Lett. 17 (2020) 601-605. doi:10.1109/LGRS.2019.2926776.
* [14] J. Leinonen, D. Nerini, A. Berne, Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network, IEEE Trans. Geosci. Remote Sens. (2020). doi:10.1109/TGRS.2020.3032790.
* [15] W. Gao, R.-J. Wai, A novel fault identification method for photovoltaic array via convolutional neural network and residual gated recurrent unit, IEEE Access 8 (2020) 159493-159510. doi:10.1109/ACCESS.2020.3020296.
* [16] F. Chollet, et al., Keras, https://keras.io, 2015.
* [17] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, in: 3rd International Conference for Learning Representations, San Diego, California, USA, 2014. URL: https://arxiv.org/abs/1412.6980.
* [18] J. Leinonen, Model weights for a Weather4cast 2021 Challenge Stage 1 solution, 2021. doi:10.5281/zenodo.5101213.

## Appendix A Online Resources

The source code with instructions to replicate the results presented in this paper can be found at https://github.com/jleinonen/weather4cast-stage1. The model weights used in the challenge submissions can be downloaded at [18].
arxiv/20ea3d07_7db2_402a_b276_44d5ea43c701.md
# PreDiff: Precipitation Nowcasting with Latent Diffusion Models

Zhihan Gao (The Hong Kong University of Science and Technology, [email protected]), Xingjian Shi (Boson AI, [email protected]), Boran Han (AWS, [email protected]), Hao Wang (AWS AI Labs, [email protected]), Xiaoyong Jin (Amazon, [email protected]), Danielle Maddix (AWS AI Labs, [email protected]), Yi Zhu (Boson AI, [email protected]), Mu Li (Boson AI, [email protected]), Yuyang Wang (AWS AI Labs, [email protected])

† Work conducted during an internship at Amazon. ‡ Work conducted while at Amazon.

## 1 Introduction

Earth's intricate climate system significantly influences daily life. Precipitation nowcasting, tasked with delivering accurate rainfall forecasts for the near future (e.g., 0-6 hours), is vital for decision-making across numerous industries and services. Recent advancements in data-driven deep learning (DL) techniques have demonstrated promising potential in this field, rivaling conventional numerical methods [8; 5] with their advantages of being more skillful [5], efficient [37], and scalable [3].

However, accurately predicting future rainfall remains challenging for data-driven algorithms. The state-of-the-art Earth system forecasting algorithms [47; 61; 41; 37; 8; 69; 2; 29; 3] typically generate blurry predictions. This is caused by the high variability and complexity inherent in Earth's climatic system: even minor differences in initial conditions can lead to vastly divergent outcomes that are difficult to predict. Most methods adopt a point estimate of the future rainfall and are trained by minimizing pixel-wise loss functions (e.g., mean-squared error). These methods lack the capability of capturing multiple plausible futures and generate blurry forecasts that lose important operational details. What are needed instead are probabilistic models that can represent the uncertainty inherent in stochastic systems. Probabilistic models can capture multiple plausible futures, generating diverse high-quality predictions that better align with real-world data.

The emergence of diffusion models (DMs) [22] has enabled powerful probabilistic frameworks for generative modeling. DMs have shown remarkable capabilities in generating high-quality images [40; 45; 43] and videos [15; 23]. As likelihood-based models, DMs do not exhibit mode collapse or training instabilities like GANs [10]. Compared to autoregressive (AR) models [53; 46; 63; 39; 65] that generate images pixel by pixel, DMs can produce higher-resolution images faster and with higher quality. They are also better at handling uncertainty [62; 57; 58; 59; 34] without drawbacks like exposure bias [13] in AR models. Latent diffusion models (LDMs) [52; 42] further improve on DMs by separating the model into two phases, applying the costly diffusion only in a compressed latent space. This alleviates the computational costs of DMs without significantly impairing performance.

Despite DMs' success in image and video generation [56; 42; 15; 66; 32], their application to precipitation nowcasting and Earth system forecasting is in its early stages [16]. One major concern is that this purely data-centric approach lacks constraints and controls from prior knowledge about the dynamic system. Some spatiotemporal forecasting approaches have incorporated domain knowledge by modifying the model architecture or adding extra training losses [11; 1; 37]. This enables them to be aware of prior knowledge and generate physically plausible forecasts.
However, these approaches still face challenges, such as requiring new model architectures to be designed, or the entire model to be retrained from scratch, when constraints change. More detailed discussions of related works are provided in Appendix A.

Inspired by recent successes in controllable generative models [68; 24; 4; 33; 6], we propose a general two-stage pipeline for training data-driven Earth system forecasting models. 1) In the first stage, we focus on capturing the intrinsic semantics in the data by training an LDM. To capture Earth's long-term and complex changes, we instantiate the LDM's core neural network as a UNet-style architecture based on Earthformer [8]. 2) In the second stage, we inject prior knowledge of the Earth system by training a knowledge alignment network that guides the sampling process of the LDM. Specifically, the alignment network parameterizes an energy function that adjusts the transition probabilities during each denoising step. This encourages the generation of physically plausible intermediate latent states while suppressing those likely to violate the given domain knowledge.

We summarize our main contributions as follows:

* We introduce a novel LDM-based model, _PreDiff_, for precipitation nowcasting.
* We propose a general two-stage pipeline for training data-driven Earth system forecasting models. Specifically, we develop a _knowledge alignment_ mechanism to guide the sampling process of PreDiff. This mechanism ensures that the generated predictions align better with domain-specific prior knowledge, thereby enhancing the reliability of the forecasts, without requiring any modifications to the trained PreDiff model.
* Our method achieves state-of-the-art performance on the \(N\)-body MNIST [8] dataset and attains state-of-the-art perceptual quality on the SEVIR [55] dataset.

## 2 Method

We follow [47; 48; 55; 1; 8] in formulating precipitation nowcasting as a spatiotemporal forecasting problem. The \(L_{\text{in}}\)-step observation is represented as a spatiotemporal sequence \(y=[y^{j}]_{j=1}^{L_{\text{in}}}\in\mathbb{R}^{L_{\text{in}}\times H\times W\times C}\), where \(H\) and \(W\) denote the spatial resolution, and \(C\) denotes the number of measurements at each space-time coordinate. Probabilistic forecasting aims to model the conditional probability distribution \(p(x|y)\) of the \(L_{\text{out}}\)-step-ahead future \(x=[x^{j}]_{j=1}^{L_{\text{out}}}\in\mathbb{R}^{L_{\text{out}}\times H\times W\times C}\), given the observation \(y\). In what follows, we present the parameterization of \(p(x|y)\) by a controllable LDM.

### Preliminary: Diffusion Models

Diffusion models (DMs) learn the data distribution \(p(x)\) by training a model to reverse a predefined noising process that progressively corrupts the data. Specifically, the noising process is defined as \(q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{t-1},(1-\alpha_{t})I)\), \(1\leq t\leq T\), where \(x_{0}\sim p(x)\) is the true data, and \(x_{T}\sim\mathcal{N}(0,I)\) is random noise. The coefficients \(\alpha_{t}\) follow a fixed schedule over the timesteps \(t\).
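Because each step is Gaussian, the noising process collapses into the standard DDPM closed form \(q(x_{t}|x_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I)\) with \(\bar{\alpha}_{t}=\prod_{s\leq t}\alpha_{s}\). A minimal NumPy sketch, with an assumed linear schedule (PreDiff's actual schedule is a configuration detail not restated here):

```python
import numpy as np

T = 1000
# Assumed linear beta schedule, beta_t = 1 - alpha_t.
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative products: alpha-bar_t

def q_sample(x0, t, rng=np.random):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps
```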
DMs factorize and parameterize the joint distribution over the data \(x_{0}\) and the noisy latents \(x_{1:T}\) as \(p_{\theta}(x_{0:T})=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})\), where each step of the reverse denoising process is a Gaussian distribution \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t))\), which is trained to recover \(x_{t-1}\) from \(x_{t}\). To apply DMs to spatiotemporal forecasting, \(p(x|y)\) is factorized and parameterized as \(p_{\theta}(x|y)=\int p_{\theta}(x_{0:T}|y)dx_{1:T}=\int p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t},y)dx_{1:T}\), where \(p_{\theta}(x_{t-1}|x_{t},y)\) represents the conditional denoising transition with the condition \(y\).

### Conditional Diffusion in Latent Space

To improve the computational efficiency of DM training and inference, _PreDiff_ follows LDM in adopting a two-phase training that leverages the benefits of lower-dimensional latent representations. The two sequential phases of PreDiff training are: 1) training a frame-wise variational autoencoder (VAE) [28] that encodes pixel space into a lower-dimensional latent space, and 2) training a conditional DM that generates predictions in this acquired latent space.

**Frame-wise autoencoder.** We follow [7] in training a frame autoencoder using a combination of a pixel-wise loss (e.g., L2 loss) and an adversarial loss. Different from [7], we exclude the perceptual loss, since there are no standard pretrained models for perception on Earth observation data. Specifically, the encoder \(\mathcal{E}\) is trained to encode a data frame \(x^{j}\in\mathbb{R}^{H\times W\times C}\) to a latent representation \(z^{j}=\mathcal{E}(x^{j})\in\mathbb{R}^{H_{z}\times W_{z}\times C_{z}}\). The decoder \(\mathcal{D}\) learns to reconstruct the data frame \(\widehat{x}^{j}=\mathcal{D}(z^{j})\) from the encoded latent. We denote \(z\sim p_{\mathcal{E}}(z|x)\in\mathbb{R}^{L\times H_{z}\times W_{z}\times C_{z}}\) as equivalent to \(z=[z^{j}]=[\mathcal{E}(x^{j})]\), representing the encoding of a sequence of frames in pixel space into a latent spatiotemporal sequence, and \(x\sim p_{\mathcal{D}}(x|z)\) denotes decoding a latent spatiotemporal sequence.

Figure 1: **Overview of PreDiff inference with knowledge alignment.** An observation sequence \(y\) is encoded into a latent context \(z_{\text{cond}}\) by the frame-wise encoder \(\mathcal{E}\). The latent diffusion model \(p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\), which is parameterized by an Earthformer-UNet, then generates the latent future \(z_{0}\) by autoregressively denoising Gaussian noise \(z_{T}\) conditioned on \(z_{\text{cond}}\). It takes the concatenation of the latent context \(z_{\text{cond}}\) (in the blue border) and the previous-step noisy latent future \(z_{t+1}\) (in the cyan border) as input, and outputs \(z_{t}\). The transition distribution of each step from \(z_{t+1}\) to \(z_{t}\) can be further refined as \(p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})\) via knowledge alignment, according to auxiliary prior knowledge. This denoising process iterates from \(t=T\) to \(t=0\), resulting in a denoised latent future \(z_{0}\). Finally, \(z_{0}\) is decoded back to pixel space by the frame-wise decoder \(\mathcal{D}\) to produce the final prediction \(\widehat{x}\). (Best viewed in color.)
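The frame-wise notation \(z=[\mathcal{E}(x^{j})]\) amounts to folding the time axis into the batch axis and applying a shared encoder. A hedged PyTorch sketch of this mapping (the encoder and decoder networks themselves are assumed):

```python
import torch

def encode_sequence(encoder, x):
    """Frame-wise encoding z = [E(x^j)]: apply the same encoder to each frame.

    x: (B, L, H, W, C) pixel-space sequence -> z: (B, L, Hz, Wz, Cz) latents."""
    B, L = x.shape[:2]
    frames = x.reshape(B * L, *x.shape[2:])   # fold time into the batch axis
    z = encoder(frames)                       # shared weights across frames
    return z.reshape(B, L, *z.shape[1:])

def decode_sequence(decoder, z):
    """Frame-wise decoding x_hat = [D(z^j)], the inverse mapping."""
    B, L = z.shape[:2]
    x = decoder(z.reshape(B * L, *z.shape[2:]))
    return x.reshape(B, L, *x.shape[1:])
```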
**Latent diffusion.** The context \(y\) is encoded by the frame-wise encoder \(\mathcal{E}\) into the learned latent space as \(z_{\text{cond}}\in\mathbb{R}^{L_{\text{in}}\times H_{z}\times W_{z}\times C_{z}}\), as in (1). The conditional distribution \(p_{\theta}(z_{0:T}|z_{\text{cond}})\) of the latent future \(z_{t}\in\mathbb{R}^{L_{\text{out}}\times H_{z}\times W_{z}\times C_{z}}\) given \(z_{\text{cond}}\) is factorized and parameterized as in (2):

\[z_{\text{cond}}\sim p_{\mathcal{E}}(z_{\text{cond}}|y), \tag{1}\]
\[p_{\theta}(z_{0:T}|z_{\text{cond}})=p(z_{T})\prod_{t=1}^{T}p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}}), \tag{2}\]

where \(z_{T}\sim p(z_{T})=\mathcal{N}(0,I)\). As proposed by [22; 45], an equivalent parameterization is to have the DM learn to match the transition noise \(\epsilon_{\theta}(z_{t},t)\) of step \(t\) instead of directly predicting \(z_{t-1}\). The training objective of PreDiff is thus simplified as shown in (3):

\[L_{\text{CLDM}}=\mathbb{E}_{(x,y),t,\epsilon\sim\mathcal{N}(0,I)}\|\epsilon-\epsilon_{\theta}(z_{t},t,z_{\text{cond}})\|_{2}^{2}, \tag{3}\]

where \((x,y)\) is a sampled pair of context and target sequences, and, given that, \(z_{t}\sim q(z_{t}|z_{0})p_{\mathcal{E}}(z_{0}|x)\) and \(z_{\text{cond}}\sim p_{\mathcal{E}}(z_{\text{cond}}|y)\).

**Instantiating \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\).** Compared to images, modeling the spatiotemporal observation data in precipitation nowcasting poses greater challenges due to their higher dimensionality. We propose replacing the UNet backbone in LDM [42] with _Earthformer-UNet_, derived from Earthformer's encoder [8], which is known for its ability to model intricate and extensive spatiotemporal dependencies in the Earth system. Earthformer-UNet adopts a hierarchical UNet architecture with self cuboid attention [8] as the building blocks, excluding the bridging cross-attention in the encoder-decoder architecture of Earthformer. More details of the architecture design of Earthformer-UNet are provided in Appendix B.1. We find Earthformer-UNet to be more stable and effective at modeling the transition distribution \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\). It takes the concatenation of the encoded latent context \(z_{\text{cond}}\) and the noisy latent future \(z_{t}\) along the temporal dimension as input, and predicts the one-step-ahead noisy latent future \(z_{t-1}\) (in practice, the transition noise \(\epsilon\) from \(z_{t}\) to \(z_{t-1}\) is predicted, as shown in (3)). A sketch of the resulting training step is given below.
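A compact training step for objective (3) might look as follows in PyTorch. The Earthformer-UNet backbone `eps_theta`, the frozen encoder \(\mathcal{E}\), and the schedule tensor are taken as given; the temporal-concatenation conditioning follows the description above. This is a hedged sketch, not the released PreDiff code.

```python
import torch
import torch.nn.functional as F

def cldm_training_step(eps_theta, encoder, alpha_bars, x, y):
    """One step of L_CLDM = E ||eps - eps_theta(z_t, t, z_cond)||^2, eq. (3).

    eps_theta: network mapping ([z_cond; z_t] concatenated on time, t) to a
               noise estimate over the L_out future frames.
    alpha_bars: (T,) tensor of cumulative noise-schedule products.
    x, y: target and context sequences, shape (B, L, Cz-compatible, H, W)."""
    with torch.no_grad():                 # VAE is trained in phase 1, frozen here
        z0 = encoder(x)                   # latent target,  (B, L_out, Cz, Hz, Wz)
        z_cond = encoder(y)               # latent context, (B, L_in,  Cz, Hz, Wz)
    t = torch.randint(0, len(alpha_bars), (z0.shape[0],), device=z0.device)
    eps = torch.randn_like(z0)
    ab = alpha_bars[t].view(-1, 1, 1, 1, 1)
    z_t = ab.sqrt() * z0 + (1 - ab).sqrt() * eps   # forward noising q(z_t | z_0)
    z_in = torch.cat([z_cond, z_t], dim=1)         # condition via temporal concat
    return F.mse_loss(eps_theta(z_in, t), eps)
```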
### Incorporating Knowledge Alignment

Though DMs hold great promise for diverse and realistic generation, the generated predictions may violate physical constraints or disregard domain-specific prior knowledge, and thereby fail to give plausible and non-trivial results [14; 44]. One possible reason for this is that DMs are not necessarily trained on data fully compliant with domain knowledge. When trained on such data, there is no guarantee that the generations sampled from the learned distribution will remain physically realizable. The causes may also stem from the stochastic nature of chaotic systems, the approximation error in denoising steps, etc. To address this issue, we propose _knowledge alignment_ to incorporate auxiliary prior knowledge,

\[\mathcal{F}(\widehat{x},y)=\mathcal{F}_{0}(y)\in\mathbb{R}^{d}, \tag{4}\]

into the diffusion generation process. The knowledge alignment imposes a constraint \(\mathcal{F}\) on the forecast \(\widehat{x}\), optionally with the observation \(y\), based on domain expertise. For example, for an isolated physical system, the knowledge \(E(\widehat{x},\cdot)=E_{0}(y^{L_{\text{in}}})\in\mathbb{R}\) imposes the conservation of energy by enforcing the generation \(\widehat{x}\) to keep the total energy \(E(\widehat{x},\cdot)\) the same as that of the last observation, \(E_{0}(y^{L_{\text{in}}})\). The violation \(\|\mathcal{F}(\widehat{x},y)-\mathcal{F}_{0}(y)\|\) quantifies the deviation of a prediction \(\widehat{x}\) from the prior knowledge: the larger the violation, the further \(\widehat{x}\) diverges from the constraints. Knowledge alignment hence aims to suppress the probability of generating predictions with large violations. Notice that even the target futures \(x\) from the training data may violate the knowledge, i.e., \(\mathcal{F}(x,y)\neq\mathcal{F}_{0}(y)\), due to noise in data collection or simulation.

Inspired by classifier guidance [4], we achieve knowledge alignment by training a knowledge alignment network \(U_{\phi}(z_{t},t,y)\) to estimate \(\mathcal{F}(\widehat{x},y)\) from the intermediate latent \(z_{t}\) at noising step \(t\). The key idea is to adjust the transition probability distribution \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\) in (2) during each latent denoising step, to reduce the likelihood of sampling \(z_{t}\) values expected to violate the constraints:

\[p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})\propto p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\cdot e^{-\lambda_{\mathcal{F}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|}, \tag{5}\]

where \(\lambda_{\mathcal{F}}\) is a guidance scale factor. The knowledge alignment network is trained by optimizing the objective \(L_{U}\) in Alg. 1. Following [4], (5) can be approximated by shifting the predicted mean of the denoising transition \(\mu_{\theta}(z_{t+1},t,z_{\text{cond}})\) by \(-\lambda_{\mathcal{F}}\Sigma_{\theta}\nabla_{z_{t}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|\), where \(\Sigma_{\theta}\) is the variance of the original transition distribution \(p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})=\mathcal{N}(\mu_{\theta}(z_{t+1},t,z_{\text{cond}}),\Sigma_{\theta}(z_{t+1},t,z_{\text{cond}}))\). A detailed derivation is provided in Appendix C.

The training procedure for knowledge alignment is outlined in Alg. 1. The noisy latent \(z_{t}\) for training the knowledge alignment network \(U_{\phi}\) is sampled by encoding the target \(x\) using the frame-wise encoder \(\mathcal{E}\) and the forward noising process \(q(z_{t}|z_{0})\), eliminating the need for an inference-time sampling process. This makes the training of the knowledge alignment network \(U_{\phi}\) independent of the LDM training. At inference time, the knowledge alignment mechanism is applied as a plug-in, without impacting the trained VAE and LDM. This modular approach allows training lightweight knowledge alignment networks \(U_{\phi}\) to flexibly explore various constraints and domain knowledge, without the need to retrain the entire model. This stands as a key advantage over incorporating constraints into model architectures or training losses.
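In code, the mean-shift approximation of (5) amounts to one extra gradient computation per denoising step. A hedged PyTorch sketch, where \(U_{\phi}\), the unguided transition statistics, and \(\lambda_{\mathcal{F}}\) are assumed to be provided:

```python
import torch

def guided_denoise_step(mu, sigma2, U_phi, F0, t, y, lam):
    """One knowledge-aligned denoising step z_{t+1} -> z_t.

    mu, sigma2: mean and (diagonal) variance of p_theta(z_t | z_{t+1}, z_cond).
    The mean is shifted by -lam * sigma2 * grad ||U_phi(z_t, t, y) - F0(y)||,
    with the gradient evaluated at the unguided mean, per the
    classifier-guidance approximation of eq. (5)."""
    z = mu.detach().requires_grad_(True)
    violation = torch.norm(U_phi(z, t, y) - F0)     # ||U_phi - F0||
    (grad,) = torch.autograd.grad(violation, z)
    guided_mu = mu - lam * sigma2 * grad            # shifted transition mean
    return guided_mu + sigma2.sqrt() * torch.randn_like(mu)
```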
## 3 Experiments

We conduct empirical studies comparing PreDiff with other state-of-the-art spatiotemporal forecasting models on a synthetic dataset, \(N\)-body MNIST [8], and a real-world precipitation nowcasting benchmark, SEVIR [55] (available at https://sevir.mit.edu/), to verify the effectiveness of PreDiff in handling the dynamics and uncertainty in complex spatiotemporal systems and generating high-quality, accurate forecasts. We impose data-specific knowledge alignment: **energy conservation** on \(N\)-body MNIST and **anticipated precipitation intensity** on SEVIR. Experiments demonstrate that PreDiff under the guidance of knowledge alignment (PreDiff-KA) is able to generate predictions that comply with domain expertise much better, without severely sacrificing fidelity.

### \(N\)-body MNIST Digits Motion Forecasting

**Dataset.** The Earth is a chaotic system with complex dynamics. Real-world Earth observation data, such as radar echo maps and satellite imagery, are usually not physically complete. We are unable to directly verify whether certain domain knowledge, like the conservation laws of energy and momentum, is satisfied or not. This makes it difficult to verify whether a method is really capable of modeling certain dynamics and adhering to the corresponding constraints. To address this, we follow [8] and generate a synthetic dataset named \(N\)-body MNIST, an extension of MovingMNIST [50]. The dataset contains sequences of digits moving subject to the gravitational force from the other digits. The governing equation for the motion is

\[\frac{d^{2}\mathbf{x}_{i}}{dt^{2}}=-\sum_{j\neq i}\frac{Gm_{j}(\mathbf{x}_{i}-\mathbf{x}_{j})}{(|\mathbf{x}_{i}-\mathbf{x}_{j}|+d_{\text{soft}})^{r}},\]

where \(\mathbf{x}_{i}\) is the spatial coordinate of the \(i\)-th digit, \(G\) is the gravitational constant, \(m_{j}\) is the mass of the \(j\)-th digit, \(r\) is a constant representing the power scale in the gravitational law, and \(d_{\text{soft}}\) is a small softening distance that ensures numerical stability. The motion occurs within a \(64\times 64\) frame. When a digit hits a boundary of the frame, it bounces back by elastic collision. We use \(N=3\) for chaotic \(3\)-body motion [35]. The forecasting task is to predict the \(10\)-step-ahead future frames \(x\in\mathbb{R}^{10\times 64\times 64\times 1}\) given the length-\(10\) context \(y\in\mathbb{R}^{10\times 64\times 64\times 1}\). We generate 20,000 sequences for training and 1,000 sequences for testing. Empirical studies on such a synthetic dataset with known dynamics help provide useful insights for model development and evaluation.
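The governing equation transcribes directly into a few lines of NumPy (acceleration under softened power-law gravity plus elastic reflection at the frame boundaries); the integrator, time step, and constants below are illustrative assumptions, not the dataset's generation script.

```python
import numpy as np

def accelerations(pos, masses, G=1.0, r=2.0, d_soft=1e-2):
    """a_i = -sum_{j != i} G m_j (x_i - x_j) / (|x_i - x_j| + d_soft)^r.

    pos: (N, 2) positions; masses: (N,)."""
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff)
            acc[i] -= G * masses[j] * diff / (dist + d_soft) ** r
    return acc

def step(pos, vel, masses, dt=0.1, size=64):
    """One Euler-style integration step inside the 64x64 frame,
    with elastic reflection at the boundaries."""
    vel = vel + dt * accelerations(pos, masses)
    pos = pos + dt * vel
    hit = (pos < 0) | (pos > size)
    vel[hit] = -vel[hit]                 # bounce back by elastic collision
    pos = np.clip(pos, 0, size)
    return pos, vel
```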
**Evaluation.** In addition to the standard metrics MSE, MAE and SSIM, we also report the Frechet Video Distance (FVD) [51], a metric for evaluating the visual quality of generated videos. Similar to the Frechet Inception Distance (FID) [20] for evaluating image generation, FVD estimates the distance between the learned distribution and the true data distribution by comparing the statistics of feature vectors extracted from the generations and the real data. The inception network used in FVD for feature extraction is pre-trained on video classification and is not specifically adapted for processing "unnatural videos" such as spatiotemporal observation data in Earth systems. Consequently, the FVD scores on the \(N\)-body MNIST and SEVIR datasets cannot be directly compared with those on natural video datasets. Nevertheless, the relative ranking of the FVD scores remains a meaningful indicator of a model's ability to achieve high visual quality, as FVD has shown consistency with expert evaluations across various domains beyond natural images [38; 26]. Scores for all involved metrics are calculated using an ensemble of eight samples from each model.

#### 3.1.1 Comparison with the State of the Art

We evaluate seven deterministic spatiotemporal forecasting models: **UNet** [55], **ConvLSTM** [47], **PredRNN** [61], **PhyDNet** [11], **E3D-LSTM** [60], **Rainformer** [1] and **Earthformer** [8], as well as two probabilistic spatiotemporal forecasting models: **VideoGPT** [65] and **LDM** [42]. All baselines are trained following the default configurations in their officially released code. More implementation details of the baselines are provided in Appendix B.2.

The results in Table 1 show that PreDiff outperforms these baselines by a large margin both in the conventional video prediction metrics (MSE, MAE, SSIM) and in the perceptual quality metric, FVD. The example predictions in Fig. 2 demonstrate that PreDiff generates predictions with sharp and clear digits in accurate positions. In contrast, the deterministic baselines resort to generating blurry predictions to accommodate uncertainty. The probabilistic baselines, though producing sharp strokes, either predict _incorrect_ positions or _fail to reconstruct_ the digits. The performance gap between LDM [42] and PreDiff serves as an ablation study that highlights the importance of the latent backbone's spatiotemporal modeling capacity: the Earthformer-UNet utilized in PreDiff demonstrates superior performance compared to the UNet in LDM [42].

Figure 2: A set of example predictions on the \(N\)-body MNIST test set. From top to bottom: context sequence \(y\), target sequence \(x\), predictions by ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], PreDiff, and PreDiff with knowledge alignment (PreDiff-KA). E.MSE denotes the average error between the total energy (kinetic \(+\) potential) of the predictions \(E(\widehat{x}^{j})\) and the total energy of the last context frame \(E(y^{L_{\text{in}}})\). The red dashed line is to help the reader judge the position of the digit "2" in the last frame.

#### 3.1.2 Knowledge Alignment: Energy Conservation

In the \(N\)-body MNIST simulation, the digits move according to Newton's law of gravity and interact with the boundaries through elastic collisions. Consequently, this system obeys the law of conservation of energy: the total energy of the whole system \(E(x^{j})\) at any future time step \(j\) during the evolution should equal the total energy at the last observation time step, \(E(y^{L_{\text{in}}})\). We impose the law of conservation of energy for the knowledge alignment on \(N\)-body MNIST in the form of (4):

\[\mathcal{F}(\widehat{x},y)\equiv[E(\widehat{x}^{1}),\dots,E(\widehat{x}^{L_{\text{out}}})]^{T}, \tag{6}\]
\[\mathcal{F}_{0}(y)\equiv[E(y^{L_{\text{in}}}),\dots,E(y^{L_{\text{in}}})]^{T}. \tag{7}\]

The ground-truth values of the total energies \(E(y^{L_{\text{in}}})\) and \(E(x^{j})\) are directly accessible, since \(N\)-body MNIST is a synthetic dataset from simulation. The total energy can be derived from the velocities (kinetic energy) and positions (potential energy) of the moving digits.
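Equations (6)-(7) translate directly into code: the constraint vector stacks the per-frame energies of the forecast, while the target repeats the energy of the last observed frame. A hedged sketch, where the per-frame energy function `total_energy` is an assumed stand-in:

```python
import torch

def F_constraint(x_hat, total_energy):
    """F(x_hat, y) = [E(x_hat^1), ..., E(x_hat^{L_out})]^T, per eq. (6)."""
    return torch.stack([total_energy(frame) for frame in x_hat.unbind(dim=0)])

def F0_target(y, L_out, total_energy):
    """F_0(y) = [E(y^{L_in}), ..., E(y^{L_in})]^T, per eq. (7)."""
    e_last = total_energy(y[-1])        # energy of the last context frame
    return e_last.expand(L_out)

# The violation ||F(x_hat, y) - F0(y)|| quantifies the drift from energy
# conservation; E.MSE/E.MAE in Table 1 compare a detected energy E_det(x_hat)
# against E(y^{L_in}) in the same spirit.
```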
A knowledge alignment network \(U_{\phi}\) is trained following Alg. 1 to guide PreDiff to generate forecasts \(\widehat{x}\) that conserve the same energy as the initial step, \(E(y^{L_{\text{in}}})\). To verify the effectiveness of the knowledge alignment in guiding the generations to comply with the law of conservation of energy, we train an energy detector \(E_{\text{det}}(\widehat{x})\) that detects the total energy of the forecasts \(\widehat{x}\). (The test MSE of the energy detector is \(5.56\times 10^{-5}\), much smaller than the E.MSE scores shown in Table 1, indicating that the energy detector has high precision and reliability for verifying energy conservation in the model forecasts.) We evaluate the energy error between the forecasts and the initial energy using \(\text{E.MSE}(\widehat{x},y)\equiv\text{MSE}(E_{\text{det}}(\widehat{x}),E(y^{L_{\text{in}}}))\) and \(\text{E.MAE}(\widehat{x},y)\equiv\text{MAE}(E_{\text{det}}(\widehat{x}),E(y^{L_{\text{in}}}))\). In this evaluation, we exclude the methods that generate blurred predictions with ambiguous digit positions, and focus only on the methods capable of producing clear digits in precise positions.

Table 1: Performance comparison on \(N\)-body MNIST. We report conventional frame quality metrics (MSE, MAE, SSIM), along with Frechet Video Distance (FVD) [51] for assessing visual quality. Energy conservation is evaluated via E.MSE and E.MAE between the energy of the predictions \(E_{\text{det}}(\widehat{x})\) and the initial energy \(E(y^{L_{\text{in}}})\). Lower values on the energy metrics indicate better compliance with the conservation of energy.

| Model | #Param. (M) | MSE ↓ | MAE ↓ | SSIM ↑ | FVD ↓ | E.MSE ↓ | E.MAE ↓ |
|---|---|---|---|---|---|---|---|
| Target | - | 0.000 | 0.000 | 1.0000 | 0.000 | 0.0132 | 0.0697 |
| Persistence | - | 104.9 | 139.0 | 0.7270 | 168.3 | - | - |
| UNet [55] | 16.6 | 38.90 | 94.29 | 0.8260 | 142.3 | - | - |
| ConvLSTM [47] | 14.0 | 32.15 | 72.64 | 0.8886 | 86.31 | - | - |
| PredRNN [61] | 23.8 | 21.76 | 54.32 | 0.9288 | 20.65 | - | - |
| PhyDNet [11] | 3.1 | 28.97 | 78.66 | 0.8206 | 178.0 | - | - |
| E3D-LSTM [60] | 12.9 | 22.98 | 62.52 | 0.9131 | 22.28 | - | - |
| Rainformer [1] | 19.2 | 38.89 | 96.47 | 0.8036 | 163.5 | - | - |
| Earthformer [8] | 7.6 | 14.82 | 39.93 | 0.9538 | 6.798 | - | - |
| VideoGPT [65] | 92.2 | 53.68 | 77.42 | 0.8468 | 39.28 | 0.0228 | 0.1092 |
| LDM [42] | 410.3 | 46.29 | 72.19 | 0.8773 | 3.432 | 0.0243 | 0.1172 |
| PreDiff | 120.7 | **9.492** | **25.01** | **0.9716** | **0.9871** | 0.0226 | 0.1083 |
| PreDiff-KA | 129.4 | 21.90 | 43.57 | 0.9303 | 4.063 | **0.0039** | **0.0443** |

As illustrated in Table 1, PreDiff-KA substantially outperforms all baseline methods and PreDiff without knowledge alignment in E.MSE and E.MAE. This demonstrates that the forecasts of PreDiff-KA comply much better with the law of conservation of energy, while still maintaining high visual quality, with an FVD score of \(4.063\). Furthermore, we detect energy errors in the target data sequences. The first row of Table 1 indicates that even the targets from the training data may not strictly adhere to the prior knowledge, possibly due to discretization errors in the simulation. Table 1 shows that all baseline methods and PreDiff have larger energy errors than the target, meaning that purely data-oriented approaches cannot eliminate the impact of noise in the training data. In contrast, PreDiff-KA, guided by the law of conservation of energy, overcomes these intrinsic defects in the training data, achieving even lower energy errors than the target.

A typical example, shown in Fig. 2, demonstrates that while PreDiff precisely reproduces the ground-truth position of digit "2" in the last frame (aligned to the red dashed line), resulting in nearly the same energy error (\(\text{E.MSE}=0.0277\)) as the ground truth's (\(\text{E.MSE}=0.0261\)), PreDiff-KA successfully corrects the motion of digit "2", providing it with a physically plausible velocity and position (slightly off the red dashed line). The knowledge alignment ensures that the generation complies better with the law of conservation of energy, resulting in a much lower \(\text{E.MSE}=0.0086\). On the contrary, none of the evaluated baselines can overcome the intrinsic noise in the data, resulting in energy errors comparable to or larger than that of the ground truth. Notice that the pixel-wise scores MSE, MAE and SSIM are less meaningful for evaluating PreDiff-KA, since correcting the noise in the energy changes the velocities and positions of the digits: a minor change in the position of a digit can cause a large pixel-wise error, even though the digit is still generated sharply and in high quality, as shown in Fig. 2.

### SEVIR Precipitation Nowcasting

**Dataset.** The Storm EVent ImageRy (SEVIR) dataset [55] is a spatiotemporal Earth observation dataset consisting of \(384\) km \(\times\) \(384\) km image sequences spanning over 4 hours. Images in SEVIR are sampled and aligned across five different data types: three channels (C02, C09, C13) from the GOES-16 advanced baseline imager, NEXRAD Vertically Integrated Liquid (VIL) mosaics, and GOES-16 Geostationary Lightning Mapper (GLM) flashes. The SEVIR benchmark supports scientific research on multiple meteorological applications including precipitation nowcasting, synthetic radar generation, front detection, etc. Due to computational resource limitations, we adopt a downsampled version of SEVIR for benchmarking precipitation nowcasting. The task is to predict the future VIL up to 60 minutes ahead (6 frames), given 70 minutes of context VIL (7 frames), at a spatial resolution of \(128\times 128\), i.e.,
\(x\in\mathbb{R}^{6\times 128\times 128\times 1}\), \(y\in\mathbb{R}^{7\times 128\times 128\times 1}\).

**Evaluation.** Following [55; 8], we adopt the Critical Success Index (CSI) for evaluation, which is commonly used in precipitation nowcasting and is defined as

\[\texttt{CSI}=\frac{\#\texttt{Hits}}{\#\texttt{Hits}+\#\texttt{Misses}+\#\texttt{F.Alarms}}.\]

To count the \(\#\texttt{Hits}\) (truth=1, pred=1), \(\#\texttt{Misses}\) (truth=1, pred=0) and \(\#\texttt{F.Alarms}\) (truth=0, pred=1), the prediction and the ground truth are rescaled to the range \(0-255\) and binarized at the thresholds \([16,74,133,160,181,219]\). We also follow [41] in reporting the CSI at the pooling scales \(4\times 4\) and \(16\times 16\), which evaluates performance on neighborhood aggregations at multiple spatial scales; these pooled CSI metrics assess the models' ability to capture local pattern distributions. Additionally, we incorporate FVD [51] and the continuous ranked probability score (CRPS) [9] for assessing the visual quality and uncertainty modeling capabilities of the investigated methods. CRPS measures the discrepancy between the predicted distribution and the true distribution. When the predicted distribution collapses into a single value, as in deterministic models, CRPS reduces to the Mean Absolute Error (MAE); a lower CRPS value indicates higher forecast accuracy. Scores for all involved metrics are calculated using an ensemble of eight samples from each model. A sketch of the thresholded CSI computation is given below.
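A minimal NumPy sketch of the thresholded and pooled CSI. The binarization and counting follow the definition above; treating the neighborhood aggregation of [41] as max pooling is an assumed reading, not confirmed by the excerpt.

```python
import numpy as np

def csi(pred, truth, threshold):
    """Critical Success Index at one threshold:
    #Hits / (#Hits + #Misses + #F.Alarms), on 0-255-scaled fields."""
    p, t = pred >= threshold, truth >= threshold
    hits = np.sum(p & t)
    misses = np.sum(~p & t)
    false_alarms = np.sum(p & ~t)
    return hits / max(hits + misses + false_alarms, 1)

def pooled_csi(pred, truth, threshold, scale):
    """CSI after scale x scale pooling (assumed max pooling), rewarding
    forecasts that place events in the right neighborhood."""
    def pool(a):
        H, W = a.shape[-2:]
        return a.reshape(*a.shape[:-2], H // scale, scale,
                         W // scale, scale).max(axis=(-3, -1))
    return csi(pool(pred), pool(truth), threshold)

# Mean CSI over the SEVIR thresholds:
# np.mean([csi(pred, truth, th) for th in [16, 74, 133, 160, 181, 219]])
```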
#### 3.2.1 Comparison to the State of the Art

We adjust the configurations of the involved baselines accordingly and tune some of the hyperparameters for adaptation to the SEVIR dataset. More implementation details of the baselines are provided in Appendix B.2. The experiment results listed in Table 2 show that probabilistic spatiotemporal forecasting methods are not good at achieving high CSI scores. However, they are more powerful at capturing the patterns and the true distribution of the data, hence achieving much better FVD scores and CSI-pool16. The qualitative results shown in Fig. 3 demonstrate that CSI is not aligned with human perceptual judgement. For such a complex system, deterministic methods give up capturing the real patterns and resort to averaging over the possible futures, i.e., blurry predictions, to keep the scores from appearing too inaccurate. Probabilistic approaches, of which PreDiff is the best, though not favored by per-pixel metrics, perform better at capturing the data distribution within a local area, resulting in higher CSI-pool16 and lower CRPS, and succeed in keeping the correct local patterns, which can be crucial for recognizing weather events. More detailed quantitative results on SEVIR are provided in Appendix D.

Figure 3: A set of example forecasts from baselines and PreDiff on the SEVIR test set. From top to bottom: context sequence \(y\), target sequence \(x\), forecasts from ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], and PreDiff.

Table 2: Performance comparison on SEVIR. The Critical Success Index (CSI), also known as the intersection over union (IoU), is calculated at different precipitation thresholds and denoted as CSI-_thresh_. CSI reports the mean of CSI-[16, 74, 133, 160, 181, 219]. CSI-pool4 and CSI-pool16 report the CSI at pooling scales of \(4\times 4\) and \(16\times 16\). We also include the continuous ranked probability score (CRPS) for probabilistic forecast assessment, and the Frechet Video Distance (FVD) for evaluating visual quality.

| Model | #Param. (M) | FVD ↓ | CRPS ↓ | CSI ↑ | CSI-pool4 ↑ | CSI-pool16 ↑ |
|---|---|---|---|---|---|---|
| Persistence | - | 525.2 | 0.0526 | 0.2613 | 0.3702 | 0.4690 |
| UNet [55] | 16.6 | 753.6 | 0.0353 | 0.3593 | 0.4098 | 0.4805 |
| ConvLSTM [47] | 14.0 | 659.7 | 0.0332 | 0.4185 | 0.4452 | 0.5135 |
| PredRNN [61] | 46.6 | 663.5 | 0.0306 | 0.4080 | 0.4497 | 0.5005 |
| PhyDNet [11] | 13.7 | 723.2 | 0.0319 | 0.3940 | 0.4379 | 0.4854 |
| E3D-LSTM [60] | 35.6 | 600.1 | 0.0297 | 0.4038 | 0.4492 | 0.4961 |
| Rainformer [1] | 184.0 | 760.5 | 0.0357 | 0.3661 | 0.4232 | 0.4738 |
| Earthformer [8] | 15.1 | 690.7 | 0.0304 | **0.4419** | 0.4562 | 0.5005 |
| DGMR [41] | 71.5 | 485.2 | 0.0435 | 0.2675 | 0.3431 | 0.4832 |
| VideoGPT [65] | 99.6 | 261.6 | 0.0381 | 0.3653 | 0.4349 | 0.5798 |
| LDM [42] | 438.6 | 133.0 | 0.0280 | 0.3580 | 0.4022 | 0.5522 |
| PreDiff | 220.5 | **33.05** | **0.0246** | 0.4100 | **0.4624** | **0.6244** |
| PreDiff-KA (\(\in[-2\sigma_{\tau},2\sigma_{\tau}]\)) | 229.4 | 34.18 | - | - | - | - |

#### 3.2.2 Knowledge Alignment: Anticipated Average Intensity

Earth system observation data, such as the Vertically Integrated Liquid (VIL) data in SEVIR, are usually not physically complete, posing challenges for directly incorporating physical laws for guidance. However, with the highly flexible knowledge alignment mechanism, we can still utilize auxiliary prior knowledge to guide the forecasting effectively. Specifically, for precipitation nowcasting on SEVIR, we use the anticipated precipitation intensity to align the generations to simulate possible extreme weather events. We denote the average intensity of a data sequence by \(I(x)\in\mathbb{R}^{+}\). In order to estimate the conditional quantiles of the future intensity, we train a simple probabilistic time series forecasting model with a parametric (Gaussian) distribution \(p_{\tau}(I(x)|[I(y^{j})])=\mathcal{N}(\mu_{\tau}([I(y^{j})]),\sigma_{\tau}([I(y^{j})]))\) that predicts the distribution of the average future intensity \(I(x)\) given the average intensity of each context frame \([I(y^{j})]_{j=1}^{L_{\text{in}}}\) (abbreviated as \([I(y^{j})]\)). By incorporating \(\mathcal{F}(\widehat{x},y)\equiv I(\widehat{x})\) and \(\mathcal{F}_{0}(y)\equiv\mu_{\tau}+n\sigma_{\tau}\) for knowledge alignment, PreDiff-KA gains the capability of generating forecasts for potential extreme cases, e.g., where \(I(\widehat{x})\) falls outside the typical range \(\mu_{\tau}\pm\sigma_{\tau}\).

Fig. 4 shows a set of generations from PreDiff and PreDiff-KA with anticipated future intensity \(\mu_{\tau}+n\sigma_{\tau}\), \(n\in\{-4,-2,2,4\}\). This qualitative example demonstrates that PreDiff is not only capable of capturing the distribution of the future, but is also flexible at highlighting possible extreme cases like rainstorms and droughts via the knowledge alignment mechanism, which is crucial for decision-making and precaution.

Figure 4: A set of example forecasts from PreDiff-KA, i.e., PreDiff under the guidance of anticipated average intensity. From top to bottom: context sequence \(y\), target sequence \(x\), forecasts from PreDiff, and forecasts from PreDiff-KA showcasing different levels of anticipated future intensity (\(\mu_{\tau}+n\sigma_{\tau}\)), where \(n\) takes the values \(4,2,-2,-4\).

According to Table 2, the FVD score of PreDiff-KA (\(34.18\)) is only slightly worse than that of PreDiff (\(33.05\)). This indicates that knowledge alignment effectively aligns the generations with prior knowledge while maintaining fidelity and adherence to the true data distribution.
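Setting the guidance target for this experiment reduces to a few lines: predict \((\mu_{\tau},\sigma_{\tau})\) from the per-frame context intensities and pick \(\mathcal{F}_{0}=\mu_{\tau}+n\sigma_{\tau}\) for the desired scenario. A hedged sketch; `intensity_model` is an assumed stand-in for the paper's Gaussian time series model.

```python
import torch

def intensity_guidance_target(intensity_model, y, n=2.0):
    """F_0(y) = mu_tau + n * sigma_tau, per Sect. 3.2.2.

    intensity_model maps the per-frame average context intensities [I(y^j)]
    to (mu, sigma) of a Gaussian over the future average intensity I(x).
    n > 0 anticipates unusually intense precipitation; n < 0 a calm scenario."""
    i_context = y.mean(dim=(-3, -2, -1))   # I(y^j): average intensity per frame
    mu, sigma = intensity_model(i_context)
    return mu + n * sigma                  # guidance target for F(x_hat, y) = I(x_hat)
```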
## 4 Conclusions and Broader Impacts

In this paper, we propose PreDiff, a novel latent diffusion model for precipitation nowcasting. We also introduce a general two-stage pipeline for training DL models for Earth system forecasting. Specifically, we develop a knowledge alignment mechanism that is capable of guiding PreDiff to generate forecasts in compliance with domain-specific prior knowledge. Experiments demonstrate that our method achieves state-of-the-art performance on the \(N\)-body MNIST and SEVIR datasets.

Our work has certain limitations: 1) Benchmark datasets and evaluation metrics for precipitation nowcasting and Earth system forecasting are still maturing compared to those in the computer vision domain. While we utilize conventional precipitation forecasting metrics and visual quality evaluation, aligning these assessments with expert judgement remains an open challenge. 2) Effective integration of physical principles and domain knowledge into DL models for precipitation nowcasting remains an active research area. Close collaboration between DL researchers and domain experts in meteorology and climatology will be key to developing hybrid models that effectively leverage both data-driven learning and scientific theory. 3) While Earth system observation data have grown substantially in recent years, high-quality data remain scarce in many domains. This scarcity can limit PreDiff's ability to accurately capture the true distribution, occasionally resulting in unrealistic forecast hallucinations under the guidance of prior knowledge, as the model attempts to circumvent the knowledge alignment mechanism. Further research on enhancing the sample efficiency of PreDiff and the knowledge alignment mechanism is needed.

In conclusion, PreDiff represents a promising advance in knowledge-aligned DL for Earth system forecasting, but work remains to improve benchmarking, incorporate scientific knowledge, and boost model robustness through collaborative research between AI and domain experts.

## References

* [1] Cong Bai, Feng Sun, Jinglin Zhang, Yi Song, and Shengyong Chen. Rainformer: Features extraction balanced network for radar-based precipitation nowcasting. _IEEE Geoscience and Remote Sensing Letters_, 19:1-5, 2022.
* [2] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Accurate medium-range global weather forecasting with 3D neural networks. _Nature_, pages 1-6, 2023.
* [3] Kang Chen, Tao Han, Junchao Gong, Lei Bai, Fenghua Ling, Jing-Jia Luo, Xi Chen, Leiming Ma, Tianning Zhang, Rui Su, et al. FengWu: Pushing the skillful global medium-range weather forecast beyond 10 days lead. _arXiv preprint arXiv:2304.02948_, 2023.
* [4] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. _Advances in Neural Information Processing Systems_, 34:8780-8794, 2021.
_Advances in Neural Information Processing Systems_, 34:8780-8794, 2021.
* [5] Lasse Espeholt, Shreya Agrawal, Casper Sonderby, Manoj Kumar, Jonathan Heek, Carla Bromberg, Cenk Gazen, Jason Hickey, Aaron Bell, and Nal Kalchbrenner. Skillful twelve hour precipitation forecasts using large context neural networks. _arXiv preprint arXiv:2111.07470_, 2021.
* [6] Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models. _arXiv preprint arXiv:2302.03011_, 2023.
* [7] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 12873-12883, 2021.
* [8] Zhihan Gao, Xingjian Shi, Hao Wang, Yi Zhu, Yuyang Wang, Mu Li, and Dit-Yan Yeung. Earthformer: Exploring space-time transformers for earth system forecasting. In _NeurIPS_, 2022.
* [9] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. _Journal of the American Statistical Association_, 102(477):359-378, 2007.
* [10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. _Advances in neural information processing systems_, 27, 2014.
* [11] Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11474-11484, 2020.
* [12] John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: Efficient token mixers for transformers. _arXiv preprint arXiv:2111.13587_, 2021.
* [13] Shantanu Gupta, Hao Wang, Zachary Lipton, and Yuyang Wang. Correcting exposure bias for link recommendation. In _ICML_, 2021.
* [14] Derek Hansen, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, and Michael W. Mahoney. Learning physical models that can respect conservation laws. In _Proceedings of the \(40^{th}\) International Conference on Machine Learning_, volume 202, 2023.
* [15] William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, and Frank Wood. Flexible diffusion modeling of long videos. _arXiv preprint arXiv:2205.11495_, 2022.
* [16] Yusuke Hatanaka, Yannik Glaser, Geoff Galgon, Giuseppe Torri, and Peter Sadowski. Diffusion models for high-resolution solar forecasts. _arXiv preprint arXiv:2302.00170_, 2023.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* [18] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). _arXiv preprint arXiv:1606.08415_, 2016.
* [19] Hans Hersbach, Bill Bell, Paul Berrisford, Shoji Hirahara, Andras Horanyi, Joaquin Munoz-Sabater, Julien Nicolas, Carole Peubey, Raluca Radu, Dinand Schepers, et al. The ERA5 global reanalysis. _Quarterly Journal of the Royal Meteorological Society_, 146(730):1999-2049, 2020.
* [20] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. _Advances in neural information processing systems_, 30, 2017.
* [21] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. _arXiv preprint arXiv:1207.0580_, 2012.
* [22] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [23] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. _arXiv preprint arXiv:2204.03458_, 2022.
* [24] Lianghua Huang, Di Chen, Yu Liu, Yujun Shen, Deli Zhao, and Jingren Zhou. Composer: Creative and controllable image synthesis with composable conditions. _arXiv preprint arXiv:2302.09778_, 2023.
* [25] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International conference on machine learning_, pages 448-456. PMLR, 2015.
* [26] Kevin Kilgour, Mauricio Zuluaga, Dominik Roblek, and Matthew Sharifi. Fréchet audio distance: A metric for evaluating music enhancement algorithms. _arXiv preprint arXiv:1812.08466_, 2018.
* [27] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [28] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. _arXiv preprint arXiv:1312.6114_, 2013.
* [29] Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Alexander Pritzel, Suman Ravuri, Timo Ewalds, Ferran Alet, Zach Eaton-Rosen, et al. GraphCast: Learning skillful medium-range global weather forecasting. _arXiv preprint arXiv:2212.12794_, 2022.
* [30] Jussi Leinonen, Ulrich Hamann, Daniele Nerini, Urs Germann, and Gabriele Franch. Latent diffusion models for generative precipitation nowcasting with accurate uncertainty quantification. _arXiv preprint arXiv:2304.12891_, 2023.
* [31] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017.
* [32] Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. VideoFusion: Decomposed diffusion models for high-quality video generation. _arXiv e-prints_, pages arXiv-2303, 2023.
* [33] Francois Maze and Faez Ahmed. TopoDiff: A performance and constraint-guided diffusion model for topology optimization. _arXiv preprint arXiv:2208.09591_, 2022.
* [34] Lu Mi, Hao Wang, Yonglong Tian, and Nir Shavit. Training-free uncertainty estimation for neural networks. In _AAAI_, 2022.
* [35] Mauri Valtonen and Hannu Karttunen. _The three-body problem_. Cambridge University Press, 2006.
* [36] Haomiao Ni, Changhao Shi, Kai Li, Sharon X. Huang, and Martin Renqiang Min. Conditional image-to-video generation with latent flow diffusion models, 2023.
* [37] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators. _arXiv preprint arXiv:2202.11214_, 2022.
* [38] Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter, and Gunter Klambauer. Fréchet ChemNet distance: a metric for generative models for molecules in drug discovery. _Journal of chemical information and modeling_, 58(9):1736-1741, 2018.
* [39] Ruslan Rakhimov, Denis Volkhonskiy, Alexey Artemov, Denis Zorin, and Evgeny Burnaev. Latent video transformer. _arXiv preprint arXiv:2006.10704_, 2020.
* [40] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [41] Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skilful precipitation nowcasting using deep generative models of radar. _Nature_, 597(7878):672-677, 2021.
* [42] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [43] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 22500-22510, 2023.
* [44] Nadim Saad, Gaurav Gupta, Shima Alizadeh, and Danielle C. Maddix. Guiding continuous operator learning through physics-based boundary constraints. In _Proceedings of the \(11^{th}\) International Conference on Learning Representations_, 2023.
* [45] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. _arXiv preprint arXiv:2205.11487_, 2022.
* [46] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. _arXiv preprint arXiv:1701.05517_, 2017.
* [47] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In _NeurIPS_, volume 28, 2015.
* [48] Xingjian Shi, Zhihan Gao, Leonard Lausen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Deep learning for precipitation nowcasting: A benchmark and a new model. In _NeurIPS_, volume 30, 2017.
* [49] Casper Kaae Sonderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. MetNet: A neural weather model for precipitation forecasting. _arXiv preprint arXiv:2003.12140_, 2020.
* [50] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using LSTMs. In _ICML_, pages 843-852. PMLR, 2015.
* [51] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. FVD: A new metric for video generation. In _DGS@ICLR_, 2019.
* [52] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. In _Neural Information Processing Systems (NeurIPS)_, 2021.
* [53] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with PixelCNN decoders. _Advances in neural information processing systems_, 29, 2016.
* [54] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NeurIPS_, volume 30, 2017.
* [55] Mark Veillette, Siddharth Samsi, and Chris Mattioli. SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. _Advances in Neural Information Processing Systems_, 33:22009-22019, 2020.
* [56] Vikram Voleti, Alexia Jolicoeur-Martineau, and Christopher Pal. Masked conditional video diffusion for prediction, generation, and interpolation. _arXiv preprint arXiv:2205.09853_, 2022.
* [57] Hao Wang, Xingjian Shi, and Dit-Yan Yeung. Natural-parameter networks: A class of probabilistic neural networks. In _NIPS_, pages 118-126, 2016.
* [58] Hao Wang and Dit-Yan Yeung. Towards Bayesian deep learning: A framework and some existing methods. _TKDE_, 28(12):3395-3408, 2016.
* [59] Hao Wang and Dit-Yan Yeung. A survey on Bayesian deep learning. _CSUR_, 53(5):1-37, 2020.
* [60] Yunbo Wang, Lu Jiang, Ming-Hsuan Yang, Li-Jia Li, Mingsheng Long, and Li Fei-Fei. Eidetic 3D LSTM: A model for video prediction and beyond. In _International conference on learning representations_, 2018.
* [61] Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip Yu, and Mingsheng Long. PredRNN: A recurrent neural network for spatiotemporal predictive learning. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2022.
* [62] Ziyan Wang and Hao Wang. Variational imbalanced regression: Fair uncertainty quantification via probabilistic smoothing. In _NeurIPS_, 2023.
* [63] Dirk Weissenborn, Oscar Tackstrom, and Jakob Uszkoreit. Scaling autoregressive video models. In _International Conference on Learning Representations_, 2019.
* [64] Yuxin Wu and Kaiming He. Group normalization. In _Proceedings of the European conference on computer vision (ECCV)_, pages 3-19, 2018.
* [65] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. VideoGPT: Video generation using VQ-VAE and transformers. _arXiv preprint arXiv:2104.10157_, 2021.
* [66] Sihyun Yu, Kihyuk Sohn, Subin Kim, and Jinwoo Shin. Video probabilistic diffusion models in projected latent space. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023.
* [67] Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. PhysDiff: Physics-guided human motion diffusion model. _arXiv preprint arXiv:2212.02500_, 2022.
* [68] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. _arXiv preprint arXiv:2302.05543_, 2023.
* [69] Lu Zhou and Rong-Hua Zhang. A self-attention-based neural network for three-dimensional multivariate modeling and its skillful ENSO predictions. _Science Advances_, 9(10):eadf2827, 2023.

## Related Work

**Deep learning for precipitation nowcasting.** In recent years, the field of DL has experienced remarkable advancements, revolutionizing various domains of study, including Earth science. One area where DL has made particularly significant strides is Earth system forecasting, especially precipitation nowcasting. Precipitation nowcasting benefits from the success of DL architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformers, which have demonstrated their effectiveness in handling spatiotemporal tensors, the typical formulation for Earth system observation data. ConvLSTM [47], a pioneering approach in DL for precipitation nowcasting, combines the strengths of CNNs and LSTMs for processing spatial and temporal data. PredRNN [61] builds upon ConvLSTM by incorporating a spatiotemporal memory flow structure.
E3D-LSTM [60] integrates 3D CNNs into LSTMs to enhance long-term high-level relation modeling. PhyDNet [11] incorporates partial differential equation (PDE) constraints in the latent space. MetNet [49] and its successor, MetNet-2 [5], propose architectures based on ConvLSTM and dilated CNNs, enabling skillful precipitation forecasts up to twelve hours ahead. DGMR [41] takes an adversarial training approach to generate sharp and accurate nowcasts, addressing the issue of blurry predictions. In addition to precipitation nowcasting, there has been a surge in the modeling of global weather and medium-range weather forecasting due to the availability of extensive Earth observation data, such as the European Centre for Medium-Range Weather Forecasts (ECMWF)'s ERA5 [19] dataset. Several DL-based models have emerged in this area. FourCastNet [37] proposes an architecture with Adaptive Fourier Neural Operators (AFNO) [12] as building blocks for autoregressive weather forecasting. FengWu [3] introduces a multi-modal Transformer-based global medium-range weather forecast model that achieves skillful forecasts up to ten days ahead. GraphCast [29] represents weather phenomena as spatiotemporal graphs and applies graph neural networks to learn skillful medium-range global weather forecasting. Pangu-Weather [2] proposes a 3D Transformer model with Earth-specific priors and a hierarchical temporal aggregation strategy for medium-range global weather forecasting.

While recent years have seen remarkable progress in DL for precipitation nowcasting, existing methods still face some limitations. Some methods are deterministic, failing to capture uncertainty and resulting in blurry generation. Others lack the capability of incorporating prior knowledge, which is crucial for machine learning for science. In contrast, PreDiff captures the uncertainty in the underlying data distribution via diffusion models, avoiding simply averaging all possibilities into blurry forecasts. Our knowledge alignment mechanism facilitates post-training alignment with physical principles and domain-specific prior knowledge.

**Diffusion models.** Diffusion models (DMs) [22] are a class of generative models that have become increasingly popular in recent years. DMs learn the data distribution by constructing a forward process that gradually adds noise to the data, and then approximating the reverse process that removes the noise. Latent diffusion models (LDMs) [42] are a variant of DMs that are trained on the latent outputs of a variational autoencoder. LDMs have been shown to be more efficient in both training and inference than pixel-space DMs. Building on the success of DMs in image generation, DMs have also been adopted for video generation. MCVD [56] trains a DM by randomly masking past and/or future frames in blocks and conditioning on the remaining frames. It generates long videos by autoregressively sampling blocks of frames in a sliding-window manner. PVDM [66] projects videos into a low-dimensional latent space as 2D vectors and presents a joint training of unconditional and frame-conditional video generation. LFDM [36] employs a flow predictor to estimate latent flows between video frames and learns a DM for temporal latent flow generation. VideoFusion [32] decomposes the transition noise in DMs into per-frame noise and noise along the time axis, and trains two networks jointly to match the noise decomposition.
While DMs have demonstrated impressive performance in video synthesis, their applications to precipitation nowcasting and other Earth science tasks have not been well explored. Hatanaka et al. [16] use DMs to super-resolve coarse numerical predictions for solar forecasting. Concurrent to our work, LDCast [30] applies LDMs to precipitation nowcasting. However, LDCast has not studied how to integrate prior knowledge into the DM, which is a unique advantage and novelty of PreDiff.

**Conditional controls on diffusion models.** Another key advantage of DMs is the ability to condition generation on text, class labels, and other modalities for controllable and diverse output. For instance, ControlNet [68] enables fine-tuning a pretrained DM by freezing the base model and training a copy end-to-end with conditional inputs. Composer [24] decomposes images into representative factors used as conditions to guide the generation. Beyond text and class labels, conditions in other modalities, including physical constraints, can also be leveraged to provide valuable guidance. TopoDiff [33] constrains topology optimization using loads, boundary conditions, and volume fractions. PhysDiff [67] trains a physics-based motion projection module with reinforcement learning to project denoised motions in diffusion steps into physically plausible ones. Nonetheless, while conditional control has proven to be a powerful technique in various domains, its application in DL for precipitation nowcasting remains an unexplored area.

## Implementation Details

All experiments are conducted on machines with NVIDIA A10G GPUs (24 GB memory). All models, including PreDiff, the knowledge alignment networks, and the baselines, fit in a single GPU without the need for gradient checkpointing or model parallelization.

### PreDiff

**Frame-wise autoencoder.** We follow [7; 42] to build frame-wise VAEs (not VQVAEs) and train them adversarially from scratch on \(N\)-body MNIST and SEVIR frames. As shown in Sec. 2.2, on the \(N\)-body MNIST dataset, the spatial downsampling ratio is \(4\times 4\). A frame \(x^{j}\in\mathbb{R}^{64\times 64\times 1}\) is encoded to \(z^{j}\in\mathbb{R}^{16\times 16\times 3}\) by parameterizing \(p_{\mathcal{E}}(z^{j}|x^{j})=\mathcal{N}(\mu_{\mathcal{E}}(x^{j}),\sigma_{\mathcal{E}}(x^{j}))\). On the SEVIR dataset, the spatial downsampling ratio is \(8\times 8\). A frame \(x^{j}\in\mathbb{R}^{128\times 128\times 1}\) is encoded to \(z^{j}\in\mathbb{R}^{16\times 16\times 4}\) similarly. The detailed configurations of the encoder and decoder of the VAE on \(N\)-body MNIST are shown in Table 3 and Table 4. The detailed configurations of the encoder and decoder of the VAE on SEVIR are shown in Table 5 and Table 6. The discriminators for adversarial training on the \(N\)-body MNIST and SEVIR datasets share the same configurations, which are shown in Table 7.

**Latent diffusion model instantiating \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\).** Stemming from Earthformer [8], we build _Earthformer-UNet_, a hierarchical UNet with self cuboid attention [8] layers as basic building blocks, as shown in Fig. 5. On \(N\)-body MNIST, it takes the concatenation along the temporal dimension (the sequence length axis) of \(z_{\text{cond}}\in\mathbb{R}^{10\times 16\times 16\times 3}\) and \(z_{t}\in\mathbb{R}^{10\times 16\times 16\times 3}\) as input, and outputs \(z_{t-1}\in\mathbb{R}^{10\times 16\times 16\times 3}\).
On SEVIR, it takes the concatenation along the temporal dimension (the sequence length axis) of \(z_{\text{cond}}\in\mathbb{R}^{7\times 16\times 16\times 4}\) and \(z_{t}\in\mathbb{R}^{6\times 16\times 16\times 4}\) as input, and outputs \(z_{t-1}\in\mathbb{R}^{6\times 16\times 16\times 4}\). In addition, we add the embedding of the denoising step \(t\) to the state before each cuboid attention block via an embedding layer TEmbed, following [22]. The detailed configurations of the Earthformer-UNet are described in Table 8.

**Knowledge alignment networks.** A knowledge alignment network parameterizes \(U_{\phi}(z_{t},t,y)\) to predict \(\mathcal{F}(\widehat{x},y)\) using the noisy latent \(z_{t}\). In practice, we build an Earthformer encoder [8] with a final pooling block as the knowledge alignment network to parameterize \(U_{\phi}(z_{t},t,z_{\text{cond}})\), which takes \(t\) and the concatenation of \(z_{\text{cond}}\) and \(z_{t}\) as inputs, instead of \(t\), \(y\), and \(z_{t}\). We find this implementation accurate enough when \(t\) is small. The detailed configurations of the knowledge alignment network are described in Table 9.

**Optimization.** We train the frame-wise VAEs using the Adam optimizer [27], following [7]. We train the latent Earthformer-UNet and the knowledge alignment network using the AdamW optimizer [31], following [8]. Detailed configurations are shown in Table 10, Table 11, and Table 12 for the frame-wise VAE, the latent Earthformer-UNet, and the knowledge alignment network, respectively. We adopt data parallelism and gradient accumulation to achieve a larger total batch size when a single GPU can only fit a smaller micro-batch size.

Figure 5: **Earthformer-UNet architecture.** PreDiff employs an Earthformer-UNet as the backbone for parameterizing the latent diffusion model \(p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\). It takes the concatenation of the latent context \(z_{\text{cond}}\) (in the blue border) and the previous-step noisy latent future \(z_{t+1}\) (in the cyan border) along the temporal dimension (the sequence length axis) as input, and outputs \(z_{t}\). (Best viewed in color.)
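To illustrate how the knowledge alignment network is used at sampling time, below is a minimal sketch of one guided denoising transition, matching the approximation \(p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})\approx\mathcal{N}(\mu_{\theta}+\Sigma_{\theta}g,\,\Sigma_{\theta})\) derived later in the appendix. PyTorch and a diagonal covariance are assumed; the function and argument names are ours, not the released API.

```python
import torch

def knowledge_aligned_step(mu, var, z_cond, t, u_phi, f0, lambda_f):
    """One guided transition: sample z_t ~ N(mu + var * g, var).

    mu, var : mean / diagonal variance of p_theta(z_t | z_{t+1}, z_cond)
    u_phi   : knowledge alignment network U_phi(z_t, t, z_cond)
    f0      : constraint target F_0(y), e.g. mu_tau + n * sigma_tau
    """
    mu_in = mu.detach().requires_grad_(True)
    # Deviation from the constraint, evaluated at z_t = mu (the Taylor point).
    deviation = (u_phi(mu_in, t, z_cond) - f0).norm()
    grad = torch.autograd.grad(deviation, mu_in)[0]
    g = -lambda_f * grad                 # gradient of the log guidance term
    guided_mean = mu.detach() + var * g  # shift the transition mean
    return guided_mean + var.sqrt() * torch.randn_like(mu)
```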
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(x^{j}\\) & - & \\(64\\times 64\\) & \\(1\\) \\\\ \\hline 2D CNN & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(1\\to 128\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(128\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(128\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(128\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\to 32\\times 32\\) & \\(128\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(128\\to 256,256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(256\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(256\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\to 16\\times 16\\) & \\(256\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(256\\to 512,512\\) \\\\ & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{6}{*}{Output Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\cline{1-1} & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\to 6\\) \\\\ \\cline{1-1} & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(6\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: The details of the encoder of the frame-wise VAE on \\(N\\)-body MNIST frames. It encodes an input frame \\(x^{j}\\in\\mathbb{R}^{64\\times 64\\times 1}\\) into a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 3}\\). Conv3 \\(\\times\\) 3 is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. SiLU is the Sigmoid Linear Unit activation layer [18] with function SiLU\\((x)=x\\cdot\\texttt{sigmoid}(x)\\). The Attention is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: Attention\\((x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
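As a reading aid for Table 3 and the following tables, here is a minimal sketch of the recurring ResNet Block and Downsampler vocabulary, assuming PyTorch and a standard pre-activation ordering; the exact in-block ordering and channel counts in the paper's code may differ slightly from this reconstruction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResNetBlock(nn.Module):
    """GroupNorm32 -> SiLU -> Conv3x3, applied twice, with a residual skip."""

    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.norm1 = nn.GroupNorm(32, c_in)
        self.conv1 = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1)
        self.norm2 = nn.GroupNorm(32, c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, kernel_size=3, padding=1)
        # 1x1 projection so the skip matches when the channel count changes.
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        h = self.conv1(F.silu(self.norm1(x)))
        h = self.conv2(F.silu(self.norm2(h)))
        return h + self.skip(x)

# Downsampler rows in the tables: a stride-2 Conv3x3 halving the resolution.
downsampler = nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1)
```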
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(z^{j}\\) & - & \\(16\\times 16\\) & \\(3\\) \\\\ \\hline 2D CNN & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(3\\) \\\\ & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(3\\to 512\\) \\\\ \\hline \\multirow{3}{*}{Self Attention Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Attention & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(512\\) \\\\ & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline Upsampler & Conv\\(3\\times 3\\) & \\(16\\times 16\\to 32\\times 32\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(512\\to 256,256,256\\) \\\\ & Conv\\(3\\times 3\\) & \\(32\\times 32\\) & \\(256\\) \\\\ & Conv\\(3\\times 3\\) & \\(32\\times 32\\) & \\(256\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(256\\) \\\\ \\hline Upsampler & Conv\\(3\\times 3\\) & \\(32\\times 32\\to 64\\times 64\\) & \\(256\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(256\\) \\\\ & Conv\\(3\\times 3\\) & \\(64\\times 64\\) & \\(256\\to 128,128,128\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv\\(3\\times 3\\) & \\(64\\times 64\\) & \\(128\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(128\\) \\\\ \\hline \\multirow{3}{*}{Output Block} & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv\\(3\\times 3\\) & \\(64\\times 64\\) & \\(128\\to 1\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: The details of the decoder of the frame-wise VAE on \\(N\\)-body MNIST frames. It decodes a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 3}\\) back to a frame in pixel space \\(x^{j}\\in\\mathbb{R}^{64\\times 64\\times 1}\\). Conv\\(3\\times 3\\) is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. SiLU is the Sigmoid Linear Unit activation layer [18] with function \\(\\texttt{SiLU}(x)=x\\cdot\\texttt{sigmoid}(x)\\). The Attention is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: \\(\\texttt{Attention}(x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(x^{j}\\) & - & \\(128\\times 128\\) & \\(1\\) \\\\ \\hline 2D CNN & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(1\\to 128\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(128\\) \\\\ & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(128\\) \\\\ & SiLU & \\(128\\times 128\\) & \\(128\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\to 64\\times 64\\) & \\(128\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(128\\to 256,256\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(256\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(256\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\to 32\\times 32\\) & \\(256\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(256\\to 512,512\\) \\\\ & GroupNorm32 & \\(32\\times 32\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(512\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(512\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\to 16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{Self Attention Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Attention & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{Output Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\to 8\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: The details of the encoder of the frame-wise VAE on SEVIR frames. It encodes an input frame \\(x^{j}\\in\\mathbb{R}^{128\\times 128\\times 1}\\) into a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 4}\\). \\(\\texttt{Conv3}\\times\\) 3 is the 2D convolutional layer with \\(3\\times 3\\) kernel. \\(\\texttt{GroupNorm32}\\) is the Group Normalization (GN) layer [64] with \\(32\\) groups. \\(\\texttt{SiLU}\\) is the Sigmoid Linear Unit activation layer [18] with function \\(\\texttt{SiLU}(x)=x\\cdot\\texttt{sigmoid}(x)\\). The \\(\\texttt{Attention}\\) is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three \\(\\texttt{Linear}\\) layers, and then does self attention operation: \\(\\texttt{Attention}(x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(z^{j}\\) & - & \\(16\\times 16\\) & \\(4\\) \\\\ \\hline \\multirow{2}{*}{2D CNN} & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(4\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(4\\to 512\\) \\\\ \\hline \\multirow{3}{*}{Self Attention Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Attention & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline Upsampler & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\to 32\\times 32\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(512\\) \\\\ & GroupNorm32 & \\(32\\times 32\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(512\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(512\\) \\\\ \\hline Upsampler & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\to 64\\times 64\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(512\\to 256,256,256\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(256\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(256\\) \\\\ \\hline Upsampler & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\to 128\\times 128\\) & \\(256\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(128\\times 128\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(256\\to 128,128,128\\) \\\\ & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(128\\) \\\\ & SiLU & \\(128\\times 128\\) & \\(128\\) \\\\ \\hline \\multirow{3}{*}{Output Block} & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & SiLU & \\(128\\times 128\\) & \\(128\\to 1\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: The details of the decoder of the frame-wise VAE on SEVIR frames. It decodes a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 4}\\) back to a frame in pixel space \\(x^{j}\\in\\mathbb{R}^{128\\times 128\\times 1}\\). Conv3 \\(\\times\\) 3 is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. SiLU is the Sigmoid Linear Unit activation layer [18] with function SiLU\\((x)=x\\cdot\\texttt{sigmoid}(x)\\). The Attention is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: Attention\\((x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
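Note that the encoders in Tables 3 and 5 output \(6\) and \(8\) channels for latents with \(3\) and \(4\) channels, respectively, which is consistent with splitting the encoder output into a mean and a log-variance map, as in standard LDM-style autoencoders. A minimal sketch of the corresponding encoding step (PyTorch assumed; `encoder` is a stand-in for the networks in the tables above):

```python
import torch

def encode_frame(encoder, x):
    """Encode one frame with the frame-wise VAE, p_E(z | x) = N(mu, sigma).

    `encoder(x)` is assumed to return 2*C channels on a 16x16 grid,
    split here into mean and log-variance (C = 3 or 4 in the tables).
    """
    mu, logvar = encoder(x).chunk(2, dim=1)               # (B, C, 16, 16) each
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization
    return z, mu, logvar
```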
\begin{table} \begin{tabular}{l|l|c|c|c} \hline \hline \multirow{2}{*}{Block} & \multirow{2}{*}{Layer} & \multicolumn{2}{c|}{Resolution} & \multirow{2}{*}{Channels} \\ & & \(N\)-body MNIST & SEVIR & \\ \hline \hline Input \(x^{j}\) & - & \(64\times 64\) & \(128\times 128\) & \(1\) \\ \hline 2D CNN & Conv4 \(\times\) 4 & \(64\times 64\to 32\times 32\) & \(128\times 128\to 64\times 64\) & \(1\to 64\) \\ \hline \multirow{3}{*}{Downsampler} & LeakyReLU & \(32\times 32\) & \(64\times 64\) & \(64\) \\ & Conv4 \(\times\) 4 & \(32\times 32\to 16\times 16\) & \(64\times 64\to 32\times 32\) & \(64\to 128\) \\ & BatchNorm & \(16\times 16\) & \(32\times 32\) & \(128\) \\ \hline \multirow{3}{*}{Downsampler} & LeakyReLU & \(16\times 16\) & \(32\times 32\) & \(128\) \\ & Conv4 \(\times\) 4 & \(16\times 16\to 8\times 8\) & \(32\times 32\to 16\times 16\) & \(128\to 256\) \\ & BatchNorm & \(8\times 8\) & \(16\times 16\) & \(256\) \\ \hline \multirow{3}{*}{Downsampler} & LeakyReLU & \(8\times 8\) & \(16\times 16\) & \(256\) \\ & Conv4 \(\times\) 4 & \(8\times 8\to 7\times 7\) & \(16\times 16\to 15\times 15\) & \(256\to 512\) \\ & BatchNorm & \(7\times 7\) & \(15\times 15\) & \(512\) \\ \hline \multirow{3}{*}{Output Block} & LeakyReLU & \(7\times 7\) & \(15\times 15\) & \(512\) \\ & Conv4 \(\times\) 4 & \(7\times 7\to 6\times 6\) & \(15\times 15\to 14\times 14\) & \(512\to 1\) \\ & AvgPool & \(6\times 6\to 1\) & \(14\times 14\to 1\) & \(1\) \\ \hline \hline \end{tabular} \end{table} Table 7: The details of the discriminator for the adversarial loss on \(N\)-body MNIST and SEVIR frames. Conv4 \(\times\) 4 is the 2D convolutional layer with \(4\times 4\) kernel, \(2\times 2\) or \(1\times 1\) stride, and \(1\times 1\) padding. BatchNorm is the Batch Normalization (BN) layer [25]. The negative slope in LeakyReLU is \(0.2\).
\\begin{table} \\begin{tabular}{l|l|c|c|c} \\hline \\hline \\multirow{2}{*}{Block} & \\multirow{2}{*}{Layer} & \\multirow{2}{*}{Spatial Resolution} & \\multicolumn{2}{c}{Channels} \\\\ & & & \\(N\\)-body MNIST & SEVIR \\\\ \\hline \\hline Input \\([z_{\\text{cond}},z_{t}]\\) & - & \\(16\\times 16\\) & \\(3\\) & \\(4\\) \\\\ \\hline Observation Mask & ConcatMask & \\(16\\times 16\\) & \\(3\\to 4\\) & \\(4\\to 5\\) \\\\ \\hline \\multirow{9}{*}{Projector} & GroupNorm32 & \\(16\\times 16\\) & \\(4\\) & \\(5\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(4\\to 256\\) & \\(5\\to 256\\) \\\\ \\cline{1-1} & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(4\\to 256\\) & \\(5\\to 256\\) \\\\ \\cline{1-1} & GroupNorm32 & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & SiLU & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Dropout & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\hline Positional Embedding & PosEmbed & \\(16\\times 16\\) & \\(256\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 4\\)} & TEmbed & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(T,1,1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,\\text{H},1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,\\text{H},1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 8\\)} & PatchMerge & \\(16\\times 16\\to 8\\times 8\\) & \\(256\\to 1024\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & Linear & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 8\\)} & TEmbed & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & Cuboid(\\(T,1,1\\)) & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,\\text{H},1\\)) & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\hline \\hline \\multirow{9}{*}{Upsampler} & NearestNeighborInterp & \\(8\\times 8\\to 16\\times 16\\) & \\(1024\\) & \\\\ \\cline{1-1} & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(1024\\to 256\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 4\\)} & TEmbed & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(T,1,1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\to 3\\) & \\(256\\to 4\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: The details of the Earthformer-UNet as the latent diffusion backbone on \\(N\\)-body MNIST and SEVIR datasets. 
The ConcatMask layer for the Observation Mask block concatenates one more channel to the input to indicate whether the input is the encoded observation \(z_{\text{cond}}\) or the noisy latent \(z_{t}\): \(1\) for \(z_{\text{cond}}\) and \(0\) for \(z_{t}\). Conv\(3\times 3\) is the 2D convolutional layer with \(3\times 3\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \(32\) groups. If the number of input data channels is smaller than \(32\), then the number of groups is set to the number of channels. SiLU is the Sigmoid Linear Unit activation layer [18] with function \(\texttt{SiLU}(x)=x\cdot\texttt{sigmoid}(x)\). The negative slope in LeakyReLU is \(0.1\). Dropout is the dropout layer [21] with probability \(0.1\) of zeroing an element. The FFN consists of two Linear layers separated by a GeLU activation layer [18]. PosEmbed is the positional embedding layer [54] that adds learned positional embeddings to the input. TEmbed is the embedding layer [22] that embeds the denoising step \(t\). PatchMerge splits a 2D input tensor with \(C\) channels into \(N\) non-overlapping \(p\times p\) patches, merges the spatial dimensions into channels to get \(N\) \(1\times 1\) patches with \(p^{2}\cdot C\) channels, and concatenates them back along the spatial dimensions. Residual connections [17] are added from blocks in the downsampling phase to corresponding blocks in the upsampling phase.

\begin{table} \begin{tabular}{l|l|c|c|c} \hline \hline \multirow{2}{*}{Block} & \multirow{2}{*}{Layer} & \multirow{2}{*}{Spatial Resolution} & \multicolumn{2}{c}{Channels} \\ & & & \(N\)-body MNIST & SEVIR \\ \hline \hline Input \([z_{\text{cond}},z_{t}]\) & - & \(16\times 16\) & \(3\) & \(4\) \\ \hline Observation Mask & ConcatMask & \(16\times 16\) & \(3\to 4\) & \(4\to 5\) \\ \hline \multirow{7}{*}{Projector} & GroupNorm32 & \(16\times 16\) & \(4\) & \(5\) \\ & SiLU & \(16\times 16\) & \(4\) & \(5\) \\ & Conv\(3\times 3\) & \(16\times 16\) & \(4\to 64\) & \(5\to 64\) \\ & GroupNorm32 & \(16\times 16\) & \(64\) & \\ & SiLU & \(16\times 16\) & \(64\) & \\ & Dropout & \(16\times 16\) & \(64\) & \\ & Conv\(3\times 3\) & \(16\times 16\) & \(64\) & \\ \hline Positional Embedding & PosEmbed & \(16\times 16\) & \(64\) & \\ \hline \multirow{10}{*}{Cuboid Attention Block} & TEmbed & \(16\times 16\) & \(64\) & \\ & LayerNorm & \(16\times 16\) & \(64\) & \\ & Cuboid(\(T,1,1\)) & \(16\times 16\) & \(64\) & \\ & FFN & \(16\times 16\) & \(64\) & \\ & LayerNorm & \(16\times 16\) & \(64\) & \\ & Cuboid(\(1,\text{H},1\)) & \(16\times 16\) & \(64\) & \\ & FFN & \(16\times 16\) & \(64\) & \\ & LayerNorm & \(16\times 16\) & \(64\) & \\ & Cuboid(\(1,1,\text{W}\)) & \(16\times 16\) & \(64\) & \\ & FFN & \(16\times 16\) & \(64\) & \\ \hline \multirow{3}{*}{Downsampler} & PatchMerge & \(16\times 16\to 8\times 8\) & \(64\to 256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\ & Linear & \(8\times 8\) & \(256\) & \\ \hline \multirow{10}{*}{Cuboid Attention Block} & TEmbed & \(8\times 8\) & \(256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\ & Cuboid(\(T,1,1\)) & \(8\times 8\) & \(256\) & \\ & FFN & \(8\times 8\) & \(256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\ & Cuboid(\(1,\text{H},1\)) & \(8\times 8\) & \(256\) & \\ & FFN & \(8\times 8\) & \(256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\ & Cuboid(\(1,1,\text{W}\)) & \(8\times 8\) & \(256\) & \\ & FFN & \(8\times 8\) & \(256\) & \\ \hline \multirow{3}{*}{Output Pooling Block} & GroupNorm32 & \(8\times 8\) & \(256\) & \\ & Attention & \(8\times 8\to 1\) & \(256\) & \\ & Linear & \(1\) & \(256\to 1\) & \\ \hline \hline \end{tabular} \end{table} Table 9: The details of the Earthformer encoders for the parameterization of the knowledge alignment networks \(U_{\phi}(z_{t},t,z_{\text{cond}})\) on \(N\)-body MNIST and SEVIR datasets. The ConcatMask layer for the Observation Mask block concatenates one more channel to the input to indicate whether the input is the encoded observation \(z_{\text{cond}}\) or the noisy latent \(z_{t}\): \(1\) for \(z_{\text{cond}}\) and \(0\) for \(z_{t}\). Conv\(3\times 3\), GroupNorm32, SiLU, Dropout, FFN, PosEmbed, TEmbed, and PatchMerge are as defined in the caption of Table 8. The Attention is the self attention layer [54] with an extra "cls" token for information aggregation. It first flattens the input and concatenates it with the "cls" token. It then maps the concatenated input to queries \(Q\), keys \(K\), and values \(V\) by three Linear layers and performs the self attention operation \(\texttt{Attention}(x)=\texttt{Softmax}(QK^{T}/\sqrt{C})V\). Finally, the value of the "cls" token after the self attention operation serves as the layer's output.
\begin{table} \begin{tabular}{l|c} \hline \hline Hyper-parameter of VAE & Value \\ \hline Learning rate & \(4.5\times 10^{-6}\) \\ \(\beta_{1}\) & \(0.5\) \\ \(\beta_{2}\) & \(0.9\) \\ Weight decay & \(10^{-2}\) \\ Batch size & \(512\) \\ Training epochs & \(200\) \\ \hline \hline Hyper-parameter of discriminator & Value \\ \hline Learning rate & \(4.5\times 10^{-6}\) \\ \(\beta_{1}\) & \(0.5\) \\ \(\beta_{2}\) & \(0.9\) \\ Weight decay & \(10^{-2}\) \\ Batch size & \(512\) \\ Training epochs & \(200\) \\ Training start step & \(50000\) \\ \hline \hline \end{tabular} \end{table} Table 10: Hyperparameters of the Adam optimizer for training frame-wise VAEs and discriminators on \(N\)-body MNIST and SEVIR datasets.

\begin{table} \begin{tabular}{l|c} \hline \hline Hyper-parameter of LDM & Value \\ \hline Learning rate & \(1.0\times 10^{-3}\) \\ \(\beta_{1}\) & \(0.9\) \\ \(\beta_{2}\) & \(0.999\) \\ Weight decay & \(10^{-5}\) \\ Batch size & \(64\) \\ Training epochs & \(1000\) \\ Warm-up percentage & \(10\%\) \\ Learning rate decay & Cosine \\ \hline \hline \end{tabular} \end{table} Table 11: Hyperparameters of the AdamW optimizer for training LDMs on \(N\)-body MNIST and SEVIR datasets.

\begin{table} \begin{tabular}{l|c} \hline \hline Hyper-parameter of knowledge alignment network & Value \\ \hline Learning rate & \(1.0\times 10^{-3}\) \\ \(\beta_{1}\) & \(0.9\) \\ \(\beta_{2}\) & \(0.999\) \\ Weight decay & \(10^{-5}\) \\ Batch size & \(64\) \\ Training epochs & \(200\) \\ Warm-up percentage & \(10\%\) \\ Learning rate decay & Cosine \\ \hline \hline \end{tabular} \end{table} Table 12: Hyperparameters of the AdamW optimizer for training knowledge alignment networks on \(N\)-body MNIST and SEVIR datasets.

### Baselines

We train baseline algorithms following their officially released configurations and tune the learning rate, learning rate scheduler, working resolution, etc., to optimize their performance on each dataset. We list the modifications we applied to the baselines for each dataset in Table 13.
\begin{table} \begin{tabular}{l|c|c} \hline \hline Model & \(N\)-body MNIST & SEVIR \\ \hline \hline UNet [55] & - & - \\ \hline \multirow{4}{*}{ConvLSTM [47]} & reverse enc-dec [48] & reverse enc-dec [48] \\ & conv\_kernels = [(7,7),(5,5),(3,3)] & conv\_kernels = [(7,7),(5,5),(3,3)] \\ & deconv\_kernels = [(6,6),(4,4),(4,4)] & deconv\_kernels = [(6,6),(4,4),(4,4)] \\ & channels = [96, 128, 256] & channels = [96, 128, 256] \\ \hline PredRNN [61] & - & - \\ \hline PhyDNet [11] & - & convcell\_hidden = [256, 256, 256, 64] \\ \hline E3D-LSTM [60] & - & - \\ \hline \multirow{4}{*}{Rainformer [1]} & downscaling\_factors = [2, 2, 2, 2] & downscaling\_factors = [4, 2, 2, 2] \\ & hidden\_dim = 32 & - \\ & heads = [4, 4, 8, 16] & - \\ & head\_dim = 8 & - \\ \hline Earthformer [8] & - & - \\ \hline DGMR [41] & - & context\_steps = 7 \\ \hline \multirow{2}{*}{VideoGPT [65]} & vqvae\_n\_codes = 512 & vqvae\_downsample = [1, 4, 4] \\ & vqvae\_downsample = [1, 8, 8] & \\ \hline \multirow{3}{*}{LDM [42]} & vae: \(64\times 64\times 1\to 16\times 16\times 3\) & vae: \(128\times 128\times 1\to 16\times 16\times 4\) \\ & conv\_dim = 3 & conv\_dim = 3 \\ & model\_channels = 256 & model\_channels = 256 \\ \hline \hline \end{tabular} \end{table} Table 13: Implementation details of baseline algorithms. Modifications based on the officially released implementations are listed according to different datasets. "-" means no modification is applied. "reverse enc-dec" means adopting the reversed encoder-decoder architecture proposed in [48]. Other terms listed are the hyperparameters in their officially released implementations.

## Derivation of the Approximation to Knowledge Alignment Guidance

We derive the approximation to the knowledge-alignment-guided denoising transition (5) following [4]. We rewrite (5) as (8), using a normalization constant \(Z\) such that \(Z\int e^{-\lambda_{\mathcal{F}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|}dz_{t}=1\):

\[p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})=p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\cdot Ze^{-\lambda_{\mathcal{F}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|}. \tag{8}\]

In what follows, we abbreviate \(\mu_{\theta}(z_{t+1},t,z_{\text{cond}})\) as \(\mu_{\theta}\) and \(\Sigma_{\theta}(z_{t+1},t,z_{\text{cond}})\) as \(\Sigma_{\theta}\) for brevity. We use \(C_{i}\), \(i\in\{1,\ldots,7\}\), to denote constants.
\\[p_{\\theta}(z_{t}|z_{t+1},z_{\\text{cond}}) =\\mathcal{N}(\\mu_{\\theta},\\Sigma_{\\theta}), \\tag{9}\\] \\[\\log p_{\\theta}(z_{t}|z_{t+1},z_{\\text{cond}}) =-\\frac{1}{2}(z_{t}-\\mu_{\\theta})^{T}\\Sigma_{\\theta}^{-1}(z_{t}- \\mu_{\\theta})+C_{1},\\] \\[\\log Ze^{-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_ {0}(y)\\|} =-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y)\\|+C_{2},\\] By assuming that \\(\\log Ze^{-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y)\\|}\\) has low curvature compared to \\(\\Sigma_{\\theta}^{-1}\\), which is reasonable in the limit of infinite diffusion steps (\\(\\|\\Sigma_{\\theta}\\|\\to 0\\)), we can approximate it by a Taylor expansion at \\(z_{t}=\\mu_{\\theta}\\) \\[\\log Ze^{-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_ {0}(y)\\|} \\approx-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y) \\|_{z_{t}=\\mu_{\\theta}} \\tag{10}\\] \\[-(z_{t}-\\mu_{\\theta})\\lambda_{\\mathcal{F}}\ abla_{z_{t}}\\|U_{ \\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y)\\|\\|_{z_{t}=\\mu_{\\theta}}\\] \\[=(z_{t}-\\mu_{\\theta})g+C_{3},\\] where \\(g=-\\lambda_{\\mathcal{F}}\ abla_{z_{t}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(More Quantitative Results on SEVIR ### Quantitative Analysis of BIAS on SEVIR Similar to Critical Success Index (CSI) introduced in Sec. 3.2, BIAS \\(=\\frac{\\#\\texttt{Bias-}\\#\\texttt{F.Alarms}}{\\#\\texttt{Bias-}+\\#\\texttt{F.Alarms}}\\) is calculated by counting the \\(\\#\\texttt{Hits}\\) (truth=1, pred=1), \\(\\#\\texttt{Misses}\\) (truth=1, pred=0) and \\(\\#\\texttt{F.Alarms}\\) (truth=0, pred=1) of the predictions binarized at thresholds \\([16,74,133,160,181,219]\\). This measurement assesses the model's inclination towards either F.Alarms or Misses. The results from Table 14 demonstrate that deterministic spatiotemporal forecasting models, such as UNet [55], ConvLSTM [47], PredRNN [61], PhyDNet [11], E3D-LSTM [60], and Earthformer [8], tend to produce predictions with lower intensity. These models prioritize avoiding high-intensity predictions that have a higher chance of being incorrect due to their limited ability to handle such uncertainty effectively. On the other hand, probabilistic spatiotemporal forecasting baselines, including DGMR [41], VideoGPT [65] and LDM [42], demonstrate a more daring approach by predicting possible high-intensity signals, even if it results in lower CSI scores, as depicted in Table 2. Among these baselines, PreDiff achieves the best performance in BIAS. It consistently achieves BIAS scores closest to \\(1\\), irrespective of the chosen threshold. These results demonstrate that PreDiff has effectively learned to unbiasedly capture the distribution of intensity. ### CSI at Varying Thresholds on SEVIR We include representative deterministic methods ConvLSTM and Earthformer, and all studied probabilistic methods to compare CSI, CSI, CSI-pool14 and CSI-pool16 at varying thresholds. It is important to note that CSI tends to favor conservative predictions, especially in situations with high levels of uncertainty. To ensure a fair comparison, we calculated the CSI scores by averaging the samples for each model, while scores in other metrics are averaged over the scores of each sample. The results presented in Table 15, 16, 17 demonstrate that our PreDiff achieves competitive CSI scores and outperforms baselines in CSI scores at pooling scale \\(4\\times 4\\) and \\(16\\times 16\\), particularly at higher thresholds. 
## More Qualitative Results on \(N\)-body MNIST

Fig. 6 to Fig. 13 show several sets of example predictions on the \(N\)-body MNIST test set. In each figure, the visualizations from top to bottom are the context sequence \(y\), the target sequence \(x\), and predictions by ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], PreDiff, and PreDiff-KA. E.MSE denotes the average error between the total energy (the sum of kinetic energy and potential energy) of the predictions \(E(\widehat{x}^{j})\) and the total energy of the last-step context \(E(y^{L_{n}})\).

Figure 8: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "0" in the last frame.

Figure 9: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "8" in the last frame.

Figure 10: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "4" in the last frame.

Figure 11: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "1" in the last frame.

Figure 12: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "7" in the last frame.

Figure 13: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "7" in the last frame.

## More Qualitative Results on SEVIR

Fig. 14 to Fig. 19 show several sets of example predictions on the SEVIR test set. In subfigure (a) of each figure, the visualizations from top to bottom are the context sequence \(y\), the target sequence \(x\), and predictions by ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], PreDiff, and PreDiff-KA. In subfigure (b) of each figure, the visualizations from top to bottom are the context sequence \(y\), the target sequence \(x\), and predictions by PreDiff-KA with anticipated average future intensity \(\mu_{\tau}+n\sigma_{\tau}\), \(n=4,2,0,-2,-4\).

Figure 16: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.

Figure 17: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.

Figure 18: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.

Figure 19: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.
Earth system forecasting has traditionally relied on complex physical models that are computationally expensive and require significant domain expertise. In the past decade, the unprecedented increase in spatiotemporal Earth observation data has enabled data-driven forecasting models built on deep learning techniques. These models have shown promise for diverse Earth system forecasting tasks. However, they either struggle with handling uncertainty or neglect domain-specific prior knowledge; as a result, they tend to average possible futures into blurry forecasts or to generate physically implausible predictions. To address these limitations, we propose a two-stage pipeline for probabilistic spatiotemporal forecasting: 1) We develop PreDiff, a conditional latent diffusion model capable of producing probabilistic forecasts. 2) We incorporate an explicit knowledge alignment mechanism to align forecasts with domain-specific physical constraints. This is achieved by estimating the deviation from the imposed constraints at each denoising step and adjusting the transition distribution accordingly. We conduct empirical studies on two datasets: \\(N\\)-body MNIST, a synthetic dataset with chaotic behavior, and SEVIR, a real-world precipitation nowcasting dataset. Specifically, we impose the law of conservation of energy in \\(N\\)-body MNIST and anticipated precipitation intensity in SEVIR. Experiments demonstrate the effectiveness of PreDiff in handling uncertainty, incorporating domain-specific prior knowledge, and generating forecasts with high operational utility.
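Read together with Eqs. (9) and (10) earlier in this row, the knowledge alignment mechanism described here amounts to shifting the mean of each Gaussian denoising transition by \\(\\Sigma_{\\theta}g\\). The PyTorch sketch below shows one way such a step could be realized; the names `U_phi`, `F0` and `lam` are placeholders for the alignment network, the constraint target, and \\(\\lambda_{\\mathcal{F}}\\), and treating \\(\\Sigma_{\\theta}\\) as a diagonal variance tensor is our simplification rather than the paper's exact implementation.

```python
import torch

def aligned_denoise_step(mu, var, y, t, U_phi, F0, lam):
    """One knowledge-aligned transition: sample z_t ~ N(mu + var * g, var),
    with g = -lam * grad_z ||U_phi(z, t, y) - F0(y)|| evaluated at z = mu."""
    z = mu.detach().requires_grad_(True)
    violation = lam * torch.norm(U_phi(z, t, y) - F0(y))
    g = -torch.autograd.grad(violation, z)[0]
    return mu + var * g + var.sqrt() * torch.randn_like(mu)
```

At each denoising step, `mu` and `var` would come from the trained latent diffusion backbone, while `U_phi` estimates the constraint value from the noisy latent, so the backbone itself needs no retraining to respect a new constraint.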
Summarize the following text.
arxiv/83b578ed_5f25_466b_89d1_6fe0e12a2701.md
"# Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites\n\nNgoc Long Nguyen\\\\(...TRUNCATED)
"Modern Earth observation satellites capture multi-exposure bursts of push-frame images that can be (...TRUNCATED)
Give a concise overview of the text below.
arxiv/a06a5d7d_1cbd_4427_ab47_1633dd79ec33.md
"# Density Invariant Contrast Maximization for Neuromorphic Earth Observations\n\nSami Arja*,1, Alex(...TRUNCATED)
"Contrast maximization (CMax) techniques are widely used in event-based vision systems to estimate t(...TRUNCATED)
Write a summary of the passage below.
arxiv/a6063ac9_4b35_45e0_87b0_d962d9fc87e2.md
"# Nanostructure-modulated planar high spectral resolution spectro-polarimeter\n\nL. P. Stoevelaar\n(...TRUNCATED)
"We present a planar spectro-polarimeter based on Fabry-Perot cavities with embedded polarization-se(...TRUNCATED)
Summarize the following text.
arxiv/bd4d25a7_a0fe_40c6_8655_40e62b899c74.md
"# Challenges in data-based geospatial modeling for environmental research and practice\n\nDiana Kol(...TRUNCATED)
"With the rise of electronic data, particularly Earth observation data, data-based geospatial modell(...TRUNCATED)
Condense the content of the following passage.
arxiv/c5c8a123_ab01_409e_8f3a_71a6a07c0956.md
"On the impact of key design aspects in simulated Hybrid Quantum Neural Networks for Earth Observati(...TRUNCATED)
"Quantum computing has introduced novel perspectives for tackling and improving machine learning tas(...TRUNCATED)
Write a summary of the passage below.
arxiv/c74d07a0_96da_498e_bbf4_2f7c976d0a60.md
"# Computationally-Efficient Climate Predictions using Multi-Fidelity Surrogate Modelling\n\nBen Hud(...TRUNCATED)
"Accurately modelling the Earth's climate has widespread applications ranging from forecasting local(...TRUNCATED)
Condense the content of the following passage.
arxiv/d0e8b859_7544_48af_bd3f_b17c49218b39.md
"# Knowledge-aware Text-Image Retrieval for Remote Sensing Images\n\nLi Mi, _Student Member, IEEE_, (...TRUNCATED)
"Image-based retrieval in large Earth observation archives is challenging because one needs to navig(...TRUNCATED)
Provide a brief summary of the text.
arxiv/ecd5d284_3a3b_40d0_bfbd_62c797afb2c9.md
"Pixel-wise Agricultural Image Time Series Classification: Comparisons and a Deformable Prototype-ba(...TRUNCATED)
"Improvements in Earth observation by satellites allow for imagery of ever higher temporal and spati(...TRUNCATED)
Condense the content of the following passage.